\begin{tabular}{l | c | c | c | c | c | p{7cm}}
{\bf Method} & {\bf Setting} & {\bf Moderate} & {\bf Easy} & {\bf Hard} & {\bf Runtime / Environment} & {\bf Reference}\\ \hline
HotSpotNet & & 45.37 \% & 53.10 \% & 41.47 \% & 0.04 s / 1 core & Q. Chen, L. Sun, Z. Wang, K. Jia and A. Yuille: Object as Hotspots. Proceedings of the European Conference on Computer Vision (ECCV) 2020.\\
Noah CV Lab - SSL & & 45.23 \% & 52.85 \% & 41.28 \% & 0.1 s / GPU & \\
VICNet & & 44.80 \% & 54.00 \% & 41.11 \% & 0.05 s / 1 core & \\
TANet & & 44.34 \% & 53.72 \% & 40.49 \% & 0.035 s / GPU & Z. Liu, X. Zhao, T. Huang, R. Hu, Y. Zhou and X. Bai: TANet: Robust 3D Object Detection from Point Clouds with Triple Attention. AAAI 2020.\\
3DSSD & & 44.27 \% & 54.64 \% & 40.23 \% & 0.04 s / GPU & Z. Yang, Y. Sun, S. Liu and J. Jia: 3DSSD: Point-based 3D Single Stage Object Detector. CVPR 2020.\\
PPBA & & 44.08 \% & 52.65 \% & 41.54 \% & N/A / GPU & \\
CentrNet-FG & & 44.02 \% & 53.51 \% & 40.53 \% & 0.03 s / 1 core & \\
Point-GNN & la & 43.77 \% & 51.92 \% & 40.14 \% & 0.6 s / GPU & W. Shi and R. Rajkumar: Point-GNN: Graph Neural Network for 3D Object Detection in a Point Cloud. CVPR 2020.\\
PP-3D & & 43.77 \% & 51.92 \% & 40.14 \% & 0.1 s / 1 core & \\
MVX-Net++ & & 43.73 \% & 50.90 \% & 39.96 \% & 0.15 s / 1 core & \\
KNN-GCNN & & 43.57 \% & 51.82 \% & 40.02 \% & 0.4 s / 1 core & \\
F-ConvNet & la & 43.38 \% & 52.16 \% & 38.80 \% & 0.47 s / GPU & Z. Wang and K. Jia: Frustum ConvNet: Sliding Frustums to Aggregate Local Point-Wise Features for Amodal 3D Object Detection. IROS 2019.\\
MMLab-PartA$^2$ & la & 43.35 \% & 53.10 \% & 40.06 \% & 0.08 s / GPU & S. Shi, Z. Wang, J. Shi, X. Wang and H. Li: From Points to Parts: 3D Object Detection from Point Cloud with Part-aware and Part-aggregation Network. IEEE Transactions on Pattern Analysis and Machine Intelligence 2020.\\
MMLab PV-RCNN & la & 43.29 \% & 52.17 \% & 40.29 \% & 0.08 s / 1 core & S. Shi, C. Guo, L. Jiang, Z. Wang, J. Shi, X. Wang and H. Li: PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection. CVPR 2020.\\
VMVS & la & 43.27 \% & 53.44 \% & 39.51 \% & 0.25 s / GPU & J. Ku, A. Pon, S. Walsh and S. Waslander: Improving 3D object detection for pedestrians with virtual multi-view synthesis orientation estimation. IROS 2019.\\
RethinkDet3D & & 43.25 \% & 53.13 \% & 40.58 \% & 0.15 s / 1 core & \\
STD & & 42.47 \% & 53.29 \% & 38.35 \% & 0.08 s / GPU & Z. Yang, Y. Sun, S. Liu, X. Shen and J. Jia: STD: Sparse-to-Dense 3D Object Detector for Point Cloud. ICCV 2019.\\
AVOD-FPN & la & 42.27 \% & 50.46 \% & 39.04 \% & 0.1 s / & J. Ku, M. Mozifian, J. Lee, A. Harakeh and S. Waslander: Joint 3D Proposal Generation and Object Detection from View Aggregation. IROS 2018.\\
SemanticVoxels & & 42.19 \% & 50.90 \% & 39.52 \% & 0.04 s / GPU & J. Fei, W. Chen, P. Heidenreich, S. Wirges and C. Stiller: SemanticVoxels: Sequential Fusion for 3D Pedestrian Detection using LiDAR Point Cloud and Semantic Segmentation. MFI 2020.\\
F-PointNet & la & 42.15 \% & 50.53 \% & 38.08 \% & 0.17 s / GPU & C. Qi, W. Liu, C. Wu, H. Su and L. Guibas: Frustum PointNets for 3D Object Detection from RGB-D Data. arXiv preprint arXiv:1711.08488 2017.\\
PointPillars & la & 41.92 \% & 51.45 \% & 38.89 \% & 16 ms / & A. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang and O. Beijbom: PointPillars: Fast Encoders for Object Detection from Point Clouds. CVPR 2019.\\
epBRM & la & 41.52 \% & 49.17 \% & 39.08 \% & 0.10 s / 1 core & K. Shin: Improving a Quality of 3D Object Detection by Spatial Transformation Mechanism. arXiv preprint arXiv:1910.04853 2019.\\
TBU & & 41.16 \% & 49.33 \% & 38.84 \% & N/A / GPU & \\
PiP & & 41.01 \% & 49.01 \% & 37.90 \% & 0.033 s / 1 core & \\
PointPainting & la & 40.97 \% & 50.32 \% & 37.87 \% & 0.4 s / GPU & S. Vora, A. Lang, B. Helou and O. Beijbom: PointPainting: Sequential Fusion for 3D Object Detection. CVPR 2020.\\
Deformable PV-RCNN & la & 40.89 \% & 46.97 \% & 38.80 \% & 0.08 s / 1 core & P. Bhattacharyya and K. Czarnecki: Deformable PV-RCNN: Improving 3D Object Detection with Learned Deformations. ECCV 2020 Perception for Autonomous Driving Workshop.\\
Simple3D Net & & 40.20 \% & 48.41 \% & 37.50 \% & 0.02 s / GPU & \\
PPFNet & & 40.11 \% & 48.36 \% & 37.00 \% & 0.1 s / 1 core & \\
AP-RCNN & & 39.53 \% & 47.63 \% & 36.44 \% & 0.02 s / 1 core & \\
IC-PVRCNN & & 39.46 \% & 45.19 \% & 37.16 \% & 0.08 s / 1 core & \\
SVGA-Net & la & 39.43 \% & 47.30 \% & 36.99 \% & 0.08 s / GPU & \\
Baseline of CA RCNN & & 39.42 \% & 47.30 \% & 36.97 \% & 0.1 s / GPU & \\
CVIS-DF3D & & 39.42 \% & 47.30 \% & 36.97 \% & 0.05 s / 1 core & \\
MMLab-PointRCNN & la & 39.37 \% & 47.98 \% & 36.01 \% & 0.1 s / GPU & S. Shi, X. Wang and H. Li: PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019.\\
ARPNET & & 39.31 \% & 48.32 \% & 35.93 \% & 0.08 s / GPU & Y. Ye, C. Zhang and X. Hao: ARPNET: attention region proposal network for 3D object detection. Science China Information Sciences 2019.\\
SCNet & la & 38.66 \% & 47.83 \% & 35.70 \% & 0.04 s / GPU & Z. Wang, H. Fu, L. Wang, L. Xiao and B. Dai: SCNet: Subdivision Coding Network for Object Detection Based on 3D Point Cloud. IEEE Access 2019.\\
CVIS-DF3D\_v2 & & 38.31 \% & 45.10 \% & 36.15 \% & 0.05 s / 1 core & \\
3DBN\_2 & & 38.23 \% & 46.79 \% & 35.57 \% & 0.12 s / 1 core & \\
IGRP+ & & 38.05 \% & 46.26 \% & 34.53 \% & 0.18 s / 1 core & \\
MGACNet & & 37.50 \% & 43.55 \% & 35.33 \% & 0.05 s / 1 core & \\
MLOD & la & 37.47 \% & 47.58 \% & 35.07 \% & 0.12 s / GPU & J. Deng and K. Czarnecki: MLOD: A multi-view 3D object detection based on robust feature fusion method. arXiv preprint arXiv:1909.04163 2019.\\
TBD & & 37.37 \% & 43.60 \% & 34.36 \% & 0.05 s / GPU & \\
IC-SECOND & & 37.18 \% & 43.82 \% & 35.35 \% & 0.06 s / 1 core & \\
VOXEL\_FPN\_HR & & 37.01 \% & 46.32 \% & 34.67 \% & 0.12 s / 8 cores & \\
PFF3D & la & 36.07 \% & 43.93 \% & 32.86 \% & 0.05 s / GPU & \\
NLK-3D & & 35.86 \% & 45.17 \% & 32.24 \% & 0.04 s / 1 core & \\
HR-SECOND & & 35.52 \% & 45.31 \% & 33.14 \% & 0.11 s / 1 core & \\
SRDL & st la & 35.28 \% & 42.66 \% & 33.26 \% & 0.15 s / GPU & \\
deprecated & & 35.21 \% & 41.32 \% & 33.32 \% & 0.06 s / 1 core & \\
AB3DMOT & la on & 34.59 \% & 42.27 \% & 31.37 \% & 0.0047 s / 1 core & X. Weng and K. Kitani: A Baseline for 3D Multi-Object Tracking. arXiv:1907.03961 2019.\\
PBASN & & 34.48 \% & 41.28 \% & 32.24 \% & N/A / GPU & \\
NLK-ALL & & 34.46 \% & 44.30 \% & 30.83 \% & 0.04 s / 1 core & \\
DAMNET & & 33.66 \% & 43.32 \% & 30.12 \% & 1 s / 1 core & \\
LZnet & & 33.55 \% & 39.51 \% & 31.15 \% & 0.08 s / 1 core & \\
BirdNet+ & la & 31.46 \% & 37.99 \% & 29.46 \% & 0.1 s / & A. Barrera, C. Guindel, J. Beltrán and F. García: BirdNet+: End-to-End 3D Object Detection in LiDAR Bird's Eye View. arXiv:2003.04188 [cs.CV] 2020.\\
Pointpillar\_TV & & 30.79 \% & 38.56 \% & 28.57 \% & 0.05 s / 1 core & \\
SparsePool & & 30.38 \% & 37.84 \% & 26.94 \% & 0.13 s / 8 cores & Z. Wang, W. Zhan and M. Tomizuka: Fusing bird view lidar point cloud and front view camera image for deep object detection. arXiv preprint arXiv:1711.06703 2017.\\
FCY & la & 29.38 \% & 37.28 \% & 26.19 \% & 0.02 s / GPU & \\
SparsePool & & 27.92 \% & 35.52 \% & 25.87 \% & 0.13 s / 8 cores & Z. Wang, W. Zhan and M. Tomizuka: Fusing bird view lidar point cloud and front view camera image for deep object detection. arXiv preprint arXiv:1711.06703 2017.\\
AVOD & la & 27.86 \% & 36.10 \% & 25.76 \% & 0.08 s / & J. Ku, M. Mozifian, J. Lee, A. Harakeh and S. Waslander: Joint 3D Proposal Generation and Object Detection from View Aggregation. IROS 2018.\\
CSW3D & la & 26.64 \% & 33.75 \% & 23.34 \% & 0.03 s / 4 cores & J. Hu, T. Wu, H. Fu, Z. Wang and K. Ding: Cascaded Sliding Window Based Real-Time 3D Region Proposal for Pedestrian Detection. ROBIO 2019.\\
SF & st la & 24.84 \% & 31.61 \% & 21.96 \% & 0.5 s / 1 core & \\
CG-Stereo & st & 24.31 \% & 33.22 \% & 20.95 \% & 0.57 s / & C. Li, J. Ku and S. Waslander: Confidence Guided Stereo 3D Object Detection with Split Depth Estimation. IROS 2020.\\
Disp R-CNN (velo) & st & 21.98 \% & 30.98 \% & 18.68 \% & 0.42 s / GPU & J. Sun, L. Chen, Y. Xie, S. Zhang, Q. Jiang, X. Zhou and H. Bao: Disp R-CNN: Stereo 3D Object Detection via Shape Prior Guided Instance Disparity Estimation. CVPR 2020.\\
Disp R-CNN & st & 21.98 \% & 31.05 \% & 18.67 \% & 0.42 s / GPU & J. Sun, L. Chen, Y. Xie, S. Zhang, Q. Jiang, X. Zhou and H. Bao: Disp R-CNN: Stereo 3D Object Detection via Shape Prior Guided Instance Disparity Estimation. CVPR 2020.\\
PB3D & st & 20.65 \% & 28.68 \% & 17.65 \% & 0.42 s / 1 core & \\
Stereo3D & st & 19.75 \% & 28.49 \% & 16.48 \% & 0.1 s / & \\
OC Stereo & st & 17.58 \% & 24.48 \% & 15.60 \% & 0.35 s / 1 core & A. Pon, J. Ku, C. Li and S. Waslander: Object-Centric Stereo Matching for 3D Object Detection. ICRA 2020.\\
BirdNet & la & 17.08 \% & 22.04 \% & 15.82 \% & 0.11 s / & J. Beltrán, C. Guindel, F. Moreno, D. Cruzado, F. García and A. Escalera: BirdNet: A 3D Object Detection Framework from LiDAR Information. 2018 21st International Conference on Intelligent Transportation Systems (ITSC) 2018.\\
DSGN & st & 15.55 \% & 20.53 \% & 14.15 \% & 0.67 s / & Y. Chen, S. Liu, X. Shen and J. Jia: DSGN: Deep Stereo Geometry Network for 3D Object Detection. CVPR 2020.\\
Complexer-YOLO & la & 13.96 \% & 17.60 \% & 12.70 \% & 0.06 s / GPU & M. Simon, K. Amende, A. Kraus, J. Honer, T. Samann, H. Kaulbersch, S. Milz and H. Michael Gross: Complexer-YOLO: Real-Time 3D Object Detection and Tracking on Semantic Point Clouds. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2019.\\
RefinedMPL & & 7.18 \% & 11.14 \% & 5.84 \% & 0.15 s / GPU & J. Vianney, S. Aich and B. Liu: RefinedMPL: Refined Monocular PseudoLiDAR for 3D Object Detection in Autonomous Driving. arXiv preprint arXiv:1911.09712 2019.\\
TopNet-HighRes & la & 6.92 \% & 10.40 \% & 6.63 \% & 101 ms / & S. Wirges, T. Fischer, C. Stiller and J. Frias: Object Detection and Classification in Occupancy Grid Maps Using Deep Convolutional Networks. 2018 21st International Conference on Intelligent Transportation Systems (ITSC) 2018.\\
MonoPair & & 6.68 \% & 10.02 \% & 5.53 \% & 0.06 s / GPU & Y. Chen, L. Tai, K. Sun and M. Li: MonoPair: Monocular 3D Object Detection Using Pairwise Spatial Relationships. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020.\\
SS3D\_HW & & 5.00 \% & 7.77 \% & 4.03 \% & 0.4 s / GPU & \\
Shift R-CNN (mono) & & 4.66 \% & 7.95 \% & 4.16 \% & 0.25 s / GPU & A. Naiden, V. Paunescu, G. Kim, B. Jeon and M. Leordeanu: Shift R-CNN: Deep Monocular 3D Object Detection With Closed-form Geometric Constraints. ICIP 2019.\\
PG-MonoNet & & 4.50 \% & 5.76 \% & 3.93 \% & 0.19 s / GPU & \\
CDI3D & & 4.03 \% & 5.64 \% & 3.29 \% & 0.03 s / GPU & \\
MonoPSR & & 4.00 \% & 6.12 \% & 3.30 \% & 0.2 s / GPU & J. Ku*, A. Pon* and S. Waslander: Monocular 3D Object Detection Leveraging Accurate Proposals and Shape Reconstruction. CVPR 2019.\\
NL\_M3D & & 3.87 \% & 5.16 \% & 3.08 \% & 0.2 s / 1 core & \\
MP-Mono & & 3.79 \% & 5.30 \% & 3.15 \% & 0.16 s / GPU & \\
DP3D & & 3.54 \% & 4.75 \% & 2.88 \% & 0.05 s / GPU & \\
M3D-RPN & & 3.48 \% & 4.92 \% & 2.94 \% & 0.16 s / GPU & G. Brazil and X. Liu: M3D-RPN: Monocular 3D Region Proposal Network for Object Detection. ICCV 2019.\\
Mono3CN & & 3.44 \% & 5.13 \% & 3.00 \% & 0.1 s / 1 core & \\
Center3D & & 3.43 \% & 4.86 \% & 2.78 \% & 0.05 s / GPU & \\
RT3D-GMP & st & 3.42 \% & 4.51 \% & 2.77 \% & 0.06 s / GPU & \\
D4LCN & & 3.42 \% & 4.55 \% & 2.83 \% & 0.2 s / GPU & M. Ding, Y. Huo, H. Yi, Z. Wang, J. Shi, Z. Lu and P. Luo: Learning Depth-Guided Convolutions for Monocular 3D Object Detection. CVPR 2020.\\
DP3D & & 3.37 \% & 4.77 \% & 2.77 \% & 0.07 s / GPU & \\
LAPNet & & 3.16 \% & 4.41 \% & 2.70 \% & 0.03 s / 1 core & \\
RT3DStereo & st & 2.45 \% & 3.28 \% & 2.35 \% & 0.08 s / GPU & H. Königshof, N. Salscheider and C. Stiller: Realtime 3D Object Detection for Automated Driving Using Stereo Vision and Semantic Information. Proc. IEEE Intl. Conf. Intelligent Transportation Systems 2019.\\
MTMono3d & & 2.05 \% & 2.40 \% & 1.68 \% & 0.05 s / 1 core & \\
TopNet-UncEst & la & 1.87 \% & 3.42 \% & 1.73 \% & 0.09 s / & S. Wirges, M. Braun, M. Lauer and C. Stiller: Capturing Object Detection Uncertainty in Multi-Layer Grid Maps. 2019.\\
SS3D & & 1.78 \% & 2.31 \% & 1.48 \% & 48 ms / & E. Jörgensen, C. Zach and F. Kahl: Monocular 3D Object Detection and Box Fitting Trained End-to-End Using Intersection-over-Union Loss. CoRR 2019.\\
UM3D\_TUM & & 1.74 \% & 3.49 \% & 1.74 \% & 0.05 s / 1 core & \\
UDI-mono3D & & 1.01 \% & 1.81 \% & 0.99 \% & 0.05 s / 1 core & \\
SparVox3D & & 0.25 \% & 0.35 \% & 0.25 \% & 0.05 s / GPU & \\
PVNet & & 0.00 \% & 0.00 \% & 0.00 \% & 0.1 s / 1 core & \\
mBoW & la & 0.00 \% & 0.00 \% & 0.00 \% & 10 s / 1 core & J. Behley, V. Steinhage and A. Cremers: Laser-based Segment Classification Using a Mixture of Bag-of-Words. Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2013.
\end{tabular}