\begin{tabular}{c | c | c | c | c | c | c}
{\bf Method} & {\bf Setting} & {\bf Moderate} & {\bf Easy} & {\bf Hard} & {\bf Runtime} & {\bf Reference}\\ \hline
HRI-MSP-L & la & 75.24 \% & 89.91 \% & 67.01 \% & 0.07 s / 1 core & \\
Noah CV Lab - SSL & & 74.45 \% & 85.96 \% & 64.23 \% & 0.1 s / GPU & \\
Deformable PV-RCNN & la & 72.61 \% & 83.93 \% & 65.82 \% & 0.08 s / 1 core & P. Bhattacharyya and K. Czarnecki: Deformable PV-RCNN: Improving 3D Object Detection with Learned Deformations. ECCV 2020 Perception for Autonomous Driving Workshop.\\
PointPainting & la & 71.54 \% & 83.91 \% & 62.97 \% & 0.4 s / GPU & S. Vora, A. Lang, B. Helou and O. Beijbom: PointPainting: Sequential Fusion for 3D Object Detection. CVPR 2020.\\
HVNet & & 71.17 \% & 83.97 \% & 63.65 \% & 0.03 s / GPU & M. Ye, S. Xu and T. Cao: HVNet: Hybrid Voxel Network for LiDAR Based 3D Object Detection. CVPR 2020.\\
IC-PVRCNN & & 70.05 \% & 85.46 \% & 63.44 \% & 0.08 s / 1 core & \\
TBD & & 69.08 \% & 83.68 \% & 62.28 \% & 0.05 s / GPU & \\
MMLab PV-RCNN & la & 68.89 \% & 82.49 \% & 62.41 \% & 0.08 s / 1 core & S. Shi, C. Guo, L. Jiang, Z. Wang, J. Shi, X. Wang and H. Li: PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection. CVPR 2020.\\
F-ConvNet & la & 68.88 \% & 84.16 \% & 60.05 \% & 0.47 s / GPU & Z. Wang and K. Jia: Frustum ConvNet: Sliding Frustums to Aggregate Local Point-Wise Features for Amodal 3D Object Detection. IROS 2019.\\
MMLab-PartA$^2$ & la & 68.73 \% & 83.43 \% & 61.85 \% & 0.08 s / GPU & S. Shi, Z. Wang, J. Shi, X. Wang and H. Li: From Points to Parts: 3D Object Detection from Point Cloud with Part-aware and Part-aggregation Network. IEEE Transactions on Pattern Analysis and Machine Intelligence 2020.\\
HotSpotNet & & 68.51 \% & 83.29 \% & 61.84 \% & 0.04 s / 1 core & Q. Chen, L. Sun, Z. Wang, K. Jia and A. Yuille: Object as Hotspots. ECCV 2020.\\
NLK-ALL & & 68.30 \% & 83.07 \% & 60.31 \% & 0.04 s / 1 core & \\
CVIS-DF3D\_v2 & & 68.21 \% & 80.74 \% & 60.44 \% & 0.05 s / 1 core & \\
IC-SECOND & & 67.98 \% & 81.50 \% & 60.82 \% & 0.06 s / 1 core & \\
3DSSD & & 67.62 \% & 85.04 \% & 61.14 \% & 0.04 s / GPU & Z. Yang, Y. Sun, S. Liu and J. Jia: 3DSSD: Point-based 3D Single Stage Object Detector. CVPR 2020.\\
MGACNet & & 67.40 \% & 82.29 \% & 60.71 \% & 0.05 s / 1 core & \\
PPBA & & 67.28 \% & 82.69 \% & 60.53 \% & N/A / GPU & \\
TBU & & 67.28 \% & 82.69 \% & 60.53 \% & N/A / GPU & \\
Point-GNN & la & 67.28 \% & 81.17 \% & 59.67 \% & 0.6 s / GPU & W. Shi and R. Rajkumar: Point-GNN: Graph Neural Network for 3D Object Detection in a Point Cloud. CVPR 2020.\\
PP-3D & & 67.28 \% & 81.17 \% & 59.67 \% & 0.1 s / 1 core & \\
MMLab-PointRCNN & la & 67.24 \% & 82.56 \% & 60.28 \% & 0.1 s / GPU & S. Shi, X. Wang and H. Li: PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud. CVPR 2019.\\
STD & & 67.23 \% & 81.36 \% & 59.35 \% & 0.08 s / GPU & Z. Yang, Y. Sun, S. Liu, X. Shen and J. Jia: STD: Sparse-to-Dense 3D Object Detector for Point Cloud. ICCV 2019.\\
KNN-GCNN & & 67.22 \% & 83.35 \% & 59.51 \% & 0.4 s / 1 core & \\
deprecated & & 66.47 \% & 78.62 \% & 60.14 \% & 0.06 s / 1 core & \\
SRDL & st la & 66.44 \% & 83.57 \% & 59.79 \% & 0.15 s / GPU & \\
RethinkDet3D & & 66.42 \% & 82.73 \% & 59.60 \% & 0.15 s / 1 core & \\
ARPNET & & 66.39 \% & 82.32 \% & 58.80 \% & 0.08 s / GPU & Y. Ye, C. Zhang and X. Hao: ARPNET: attention region proposal network for 3D object detection. Science China Information Sciences 2019.\\
AB3DMOT & la on & 65.85 \% & 80.00 \% & 58.69 \% & 0.0047 s / 1 core & X. Weng and K. Kitani: A Baseline for 3D Multi-Object Tracking. arXiv:1907.03961 2019.\\
PiP & & 65.12 \% & 79.51 \% & 58.25 \% & 0.033 s / 1 core & \\
VOXEL\_FPN\_HR & & 65.02 \% & 81.07 \% & 58.44 \% & 0.12 s / 8 cores & \\
MVX-Net++ & & 64.84 \% & 78.89 \% & 58.15 \% & 0.15 s / 1 core & \\
Baseline of CA RCNN & & 64.53 \% & 79.62 \% & 57.91 \% & 0.1 s / GPU & \\
CVIS-DF3D & & 64.53 \% & 79.62 \% & 57.91 \% & 0.05 s / 1 core & \\
SVGA-Net & la & 64.52 \% & 79.64 \% & 57.90 \% & 0.08 s / GPU & \\
3DBN\_2 & & 64.28 \% & 81.06 \% & 57.55 \% & 0.12 s / 1 core & \\
HR-SECOND & & 64.21 \% & 78.79 \% & 57.82 \% & 0.11 s / 1 core & \\
LZnet & & 63.89 \% & 78.17 \% & 56.73 \% & 0.08 s / 1 core & \\
TANet & & 63.77 \% & 79.16 \% & 56.21 \% & 0.035 s / GPU & Z. Liu, X. Zhao, T. Huang, R. Hu, Y. Zhou and X. Bai: TANet: Robust 3D Object Detection from Point Clouds with Triple Attention. AAAI 2020.\\
PBASN & & 63.34 \% & 79.45 \% & 57.01 \% & N/A / GPU & \\
VICNet & & 63.21 \% & 82.22 \% & 56.41 \% & 0.05 s / 1 core & \\
NLK-3D & & 62.97 \% & 80.61 \% & 56.52 \% & 0.04 s / 1 core & \\
PointPillars & la & 62.73 \% & 79.90 \% & 55.58 \% & 16 ms / & A. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang and O. Beijbom: PointPillars: Fast Encoders for Object Detection from Point Clouds. CVPR 2019.\\
AP-RCNN & & 62.49 \% & 78.64 \% & 55.87 \% & 0.02 s / 1 core & \\
FCY & la & 62.25 \% & 78.65 \% & 54.74 \% & 0.02 s / GPU & \\
CentrNet-FG & & 62.10 \% & 76.94 \% & 54.94 \% & 0.03 s / 1 core & \\
F-PointNet & la & 61.37 \% & 77.26 \% & 53.78 \% & 0.17 s / GPU & C. Qi, W. Liu, C. Wu, H. Su and L. Guibas: Frustum PointNets for 3D Object Detection from RGB-D Data. arXiv preprint arXiv:1711.08488 2017.\\
epBRM & la & 59.79 \% & 75.13 \% & 53.36 \% & 0.10 s / 1 core & K. Shin: Improving a Quality of 3D Object Detection by Spatial Transformation Mechanism. arXiv preprint arXiv:1910.04853 2019.\\
Pointpillar\_TV & & 59.26 \% & 74.78 \% & 52.33 \% & 0.05 s / 1 core & \\
Simple3D Net & & 59.03 \% & 75.72 \% & 52.42 \% & 0.02 s / GPU & \\
IGRP+ & & 57.94 \% & 76.25 \% & 51.86 \% & 0.18 s / 1 core & \\
AVOD-FPN & la & 57.12 \% & 69.39 \% & 51.09 \% & 0.1 s / & J. Ku, M. Mozifian, J. Lee, A. Harakeh and S. Waslander: Joint 3D Proposal Generation and Object Detection from View Aggregation. IROS 2018.\\
SCNet & la & 56.39 \% & 73.73 \% & 49.99 \% & 0.04 s / GPU & Z. Wang, H. Fu, L. Wang, L. Xiao and B. Dai: SCNet: Subdivision Coding Network for Object Detection Based on 3D Point Cloud. IEEE Access 2019.\\
PFF3D & la & 55.71 \% & 72.67 \% & 49.58 \% & 0.05 s / GPU & \\
MLOD & la & 55.06 \% & 73.03 \% & 48.21 \% & 0.12 s / GPU & J. Deng and K. Czarnecki: MLOD: A multi-view 3D object detection based on robust feature fusion method. arXiv preprint arXiv:1909.04163 2019.\\
BirdNet+ & la & 52.15 \% & 72.45 \% & 46.57 \% & 0.1 s / & A. Barrera, C. Guindel, J. Beltrán and F. García: BirdNet+: End-to-End 3D Object Detection in LiDAR Bird's Eye View. arXiv:2003.04188 [cs.CV] 2020.\\
DAMNET & & 49.71 \% & 67.52 \% & 45.21 \% & 1 s / 1 core & \\
AVOD & la & 48.15 \% & 64.11 \% & 42.37 \% & 0.08 s / & J. Ku, M. Mozifian, J. Lee, A. Harakeh and S. Waslander: Joint 3D Proposal Generation and Object Detection from View Aggregation. IROS 2018.\\
BirdNet & la & 41.56 \% & 58.64 \% & 36.94 \% & 0.11 s / & J. Beltrán, C. Guindel, F. Moreno, D. Cruzado, F. García and A. Escalera: BirdNet: A 3D Object Detection Framework from LiDAR Information. 2018 21st International Conference on Intelligent Transportation Systems (ITSC) 2018.\\
SparsePool & & 40.74 \% & 56.52 \% & 36.68 \% & 0.13 s / 8 cores & Z. Wang, W. Zhan and M. Tomizuka: Fusing bird view lidar point cloud and front view camera image for deep object detection. arXiv preprint arXiv:1711.06703 2017.\\
TopNet-Retina & la & 36.83 \% & 47.48 \% & 33.58 \% & 52 ms / & S. Wirges, T. Fischer, C. Stiller and J. Frias: Object Detection and Classification in Occupancy Grid Maps Using Deep Convolutional Networks. 2018 21st International Conference on Intelligent Transportation Systems (ITSC) 2018.\\
CG-Stereo & st & 36.25 \% & 55.33 \% & 32.17 \% & 0.57 s / & C. Li, J. Ku and S. Waslander: Confidence Guided Stereo 3D Object Detection with Split Depth Estimation. IROS 2020.\\
SparsePool & & 35.24 \% & 43.55 \% & 30.15 \% & 0.13 s / 8 cores & Z. Wang, W. Zhan and M. Tomizuka: Fusing bird view lidar point cloud and front view camera image for deep object detection. arXiv preprint arXiv:1711.06703 2017.\\
Disp R-CNN (velo) & st & 26.46 \% & 43.41 \% & 22.46 \% & 0.42 s / GPU & J. Sun, L. Chen, Y. Xie, S. Zhang, Q. Jiang, X. Zhou and H. Bao: Disp R-CNN: Stereo 3D Object Detection via Shape Prior Guided Instance Disparity Estimation. CVPR 2020.\\
Disp R-CNN & st & 26.46 \% & 43.41 \% & 22.46 \% & 0.42 s / GPU & J. Sun, L. Chen, Y. Xie, S. Zhang, Q. Jiang, X. Zhou and H. Bao: Disp R-CNN: Stereo 3D Object Detection via Shape Prior Guided Instance Disparity Estimation. CVPR 2020.\\
Complexer-YOLO & la & 25.43 \% & 32.00 \% & 22.88 \% & 0.06 s / GPU & M. Simon, K. Amende, A. Kraus, J. Honer, T. Samann, H. Kaulbersch, S. Milz and H. Michael Gross: Complexer-YOLO: Real-Time 3D Object Detection and Tracking on Semantic Point Clouds. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2019.\\
DSGN & st & 21.04 \% & 31.23 \% & 18.93 \% & 0.67 s / & Y. Chen, S. Liu, X. Shen and J. Jia: DSGN: Deep Stereo Geometry Network for 3D Object Detection. CVPR 2020.\\
PB3D & st & 19.41 \% & 32.06 \% & 17.42 \% & 0.42 s / 1 core & \\
OC Stereo & st & 19.23 \% & 32.47 \% & 17.11 \% & 0.35 s / 1 core & A. Pon, J. Ku, C. Li and S. Waslander: Object-Centric Stereo Matching for 3D Object Detection. ICRA 2020.\\
TopNet-DecayRate & la & 16.00 \% & 23.02 \% & 13.24 \% & 92 ms / & S. Wirges, T. Fischer, C. Stiller and J. Frias: Object Detection and Classification in Occupancy Grid Maps Using Deep Convolutional Networks. 2018 21st International Conference on Intelligent Transportation Systems (ITSC) 2018.\\
TopNet-UncEst & la & 9.18 \% & 12.31 \% & 8.14 \% & 0.09 s / & S. Wirges, M. Braun, M. Lauer and C. Stiller: Capturing Object Detection Uncertainty in Multi-Layer Grid Maps. 2019.\\
RT3D-GMP & st & 6.90 \% & 10.09 \% & 6.14 \% & 0.06 s / GPU & \\
TopNet-HighRes & la & 6.48 \% & 9.99 \% & 6.76 \% & 101 ms / & S. Wirges, T. Fischer, C. Stiller and J. Frias: Object Detection and Classification in Occupancy Grid Maps Using Deep Convolutional Networks. 2018 21st International Conference on Intelligent Transportation Systems (ITSC) 2018.\\
MonoPSR & & 5.78 \% & 9.87 \% & 4.57 \% & 0.2 s / GPU & J. Ku*, A. Pon* and S. Waslander: Monocular 3D Object Detection Leveraging Accurate Proposals and Shape Reconstruction. CVPR 2019.\\
RT3DStereo & st & 4.10 \% & 7.03 \% & 3.88 \% & 0.08 s / GPU & H. Königshof, N. Salscheider and C. Stiller: Realtime 3D Object Detection for Automated Driving Using Stereo Vision and Semantic Information. Proc. IEEE Intl. Conf. Intelligent Transportation Systems 2019.\\
CDI3D & & 3.78 \% & 6.01 \% & 3.24 \% & 0.03 s / GPU & \\
MonoPair & & 2.87 \% & 4.76 \% & 2.42 \% & 0.06 s / GPU & Y. Chen, L. Tai, K. Sun and M. Li: MonoPair: Monocular 3D Object Detection Using Pairwise Spatial Relationships. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020.\\
SS3D\_HW & & 2.78 \% & 5.03 \% & 2.36 \% & 0.4 s / GPU & \\
Center3D & & 2.76 \% & 5.28 \% & 2.72 \% & 0.05 s / GPU & \\
Mono3CN & & 2.69 \% & 3.92 \% & 2.19 \% & 0.1 s / 1 core & \\
RefinedMPL & & 2.42 \% & 4.23 \% & 2.14 \% & 0.15 s / GPU & J. Vianney, S. Aich and B. Liu: RefinedMPL: Refined Monocular PseudoLiDAR for 3D Object Detection in Autonomous Driving. arXiv preprint arXiv:1911.09712 2019.\\
UDI-mono3D & & 2.01 \% & 3.59 \% & 1.79 \% & 0.05 s / 1 core & \\
NL\_M3D & & 2.01 \% & 2.70 \% & 1.75 \% & 0.2 s / 1 core & \\
PG-MonoNet & & 1.89 \% & 3.00 \% & 1.66 \% & 0.19 s / GPU & \\
SS3D & & 1.89 \% & 3.45 \% & 1.44 \% & 48 ms / & E. Jörgensen, C. Zach and F. Kahl: Monocular 3D Object Detection and Box Fitting Trained End-to-End Using Intersection-over-Union Loss. CoRR 2019.\\
DP3D & & 1.87 \% & 3.09 \% & 1.96 \% & 0.07 s / GPU & \\
D4LCN & & 1.82 \% & 2.72 \% & 1.79 \% & 0.2 s / GPU & M. Ding, Y. Huo, H. Yi, Z. Wang, J. Shi, Z. Lu and P. Luo: Learning Depth-Guided Convolutions for Monocular 3D Object Detection. CVPR 2020.\\
MP-Mono & & 1.58 \% & 2.43 \% & 1.70 \% & 0.16 s / GPU & \\
DP3D & & 1.57 \% & 2.32 \% & 1.29 \% & 0.05 s / GPU & \\
MTMono3d & & 1.30 \% & 2.06 \% & 1.06 \% & 0.05 s / 1 core & \\
LAPNet & & 1.03 \% & 1.71 \% & 1.04 \% & 0.03 s / 1 core & \\
M3D-RPN & & 0.81 \% & 1.25 \% & 0.78 \% & 0.16 s / GPU & G. Brazil and X. Liu: M3D-RPN: Monocular 3D Region Proposal Network for Object Detection. ICCV 2019.\\
UM3D\_TUM & & 0.62 \% & 0.45 \% & 0.62 \% & 0.05 s / 1 core & \\
Shift R-CNN (mono) & & 0.38 \% & 0.76 \% & 0.41 \% & 0.25 s / GPU & A. Naiden, V. Paunescu, G. Kim, B. Jeon and M. Leordeanu: Shift R-CNN: Deep Monocular 3D Object Detection With Closed-form Geometric Constraints. ICIP 2019.\\
PVNet & & 0.00 \% & 0.00 \% & 0.00 \% & 0.1 s / 1 core & \\
mBoW & la & 0.00 \% & 0.00 \% & 0.00 \% & 10 s / 1 core & J. Behley, V. Steinhage and A. Cremers: Laser-based Segment Classification Using a Mixture of Bag-of-Words. Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2013.
\end{tabular}