\begin{tabular}{l | c | c | c | c | c | p{8cm}}
{\bf Method} & {\bf Setting} & {\bf Moderate} & {\bf Easy} & {\bf Hard} & {\bf Runtime / Hardware} & {\bf Reference}\\ \hline
UPIDet & & 78.19 \% & 89.65 \% & 71.13 \% & 0.11 s / 1 core & Y. Zhang, Q. Zhang, J. Hou, Y. Yuan and G. Xing: Unleash the Potential of Image Branch for Cross-modal 3D Object Detection. NeurIPS 2023.\\
CasA++ & & 76.99 \% & 88.93 \% & 70.10 \% & 0.1 s / 1 core & H. Wu, J. Deng, C. Wen, X. Li and C. Wang: CasA: A Cascade Attention Network for 3D Object Detection from LiDAR point clouds. IEEE Transactions on Geoscience and Remote Sensing 2022.\\
TED & & 76.95 \% & 89.54 \% & 70.31 \% & 0.1 s / 1 core & H. Wu, C. Wen, W. Li, R. Yang and C. Wang: Transformation-Equivariant 3D Object Detection for Autonomous Driving. AAAI 2023.\\
CasA & & 75.74 \% & 88.99 \% & 68.47 \% & 0.1 s / 1 core & H. Wu, J. Deng, C. Wen, X. Li and C. Wang: CasA: A Cascade Attention Network for 3D Object Detection from LiDAR point clouds. IEEE Transactions on Geoscience and Remote Sensing 2022.\\
LoGoNet & & 74.92 \% & 85.85 \% & 67.62 \% & 0.1 s / 1 core & X. Li, T. Ma, Y. Hou, B. Shi, Y. Yang, Y. Liu, X. Wu, Q. Chen, Y. Li, Y. Qiao and others: LoGoNet: Towards Accurate 3D Object Detection with Local-to-Global Cross-Modal Fusion. CVPR 2023.\\
MLF-DET & & 74.88 \% & 86.20 \% & 66.75 \% & 0.09 s / 1 core & Z. Lin, Y. Shen, S. Zhou, S. Chen and N. Zheng: MLF-DET: Multi-Level Fusion for Cross-Modal 3D Object Detection. International Conference on Artificial Neural Networks 2023.\\
USVLab BSAODet & & 74.38 \% & 85.01 \% & 67.38 \% & 0.04 s / 1 core & W. Xiao, Y. Peng, C. Liu, J. Gao, Y. Wu and X. Li: Balanced Sample Assignment and Objective for Single-Model Multi-Class 3D Object Detection. IEEE Transactions on Circuits and Systems for Video Technology 2023.\\
PSMS-Net & la & 74.30 \% & 85.06 \% & 66.34 \% & 0.1 s / 1 core & \\
HMFI & & 74.06 \% & 85.69 \% & 67.11 \% & 0.1 s / 1 core & X. Li, B. Shi, Y. Hou, X. Wu, T. Ma, Y. Li and L. He: Homogeneous Multi-modal Feature Fusion and Interaction for 3D Object Detection. ECCV 2022.\\
VPA & & 73.91 \% & 84.94 \% & 66.92 \% & 0.01 s / 1 core & \\
CZY\_PPF\_Net & & 73.64 \% & 85.39 \% & 66.01 \% & 0.1 s / 1 core & \\
EQ-PVRCNN & & 73.30 \% & 86.25 \% & 65.49 \% & 0.2 s / GPU & Z. Yang, L. Jiang, Y. Sun, B. Schiele and J. Jia: A Unified Query-based Paradigm for Point Cloud Understanding. CVPR 2022.\\
U\_PV\_V2\_ep100\_80 & & 73.17 \% & 86.95 \% & 66.01 \% & 0... s / 1 core & \\
OGMMDet & & 72.92 \% & 86.07 \% & 65.95 \% & 0.01 s / 1 core & \\
ANM & & 72.92 \% & 86.07 \% & 65.95 \% & n/a & \\
OFFNet & & 72.74 \% & 83.33 \% & 67.53 \% & 0.1 s / GPU & \\
DSA-PV-RCNN & la & 72.61 \% & 83.93 \% & 65.82 \% & 0.08 s / 1 core & P. Bhattacharyya, C. Huang and K. Czarnecki: SA-Det3D: Self-Attention Based Context-Aware 3D Object Detection. 2021.\\
CAT-Det & & 72.51 \% & 85.35 \% & 65.55 \% & 0.3 s / GPU & Y. Zhang, J. Chen and D. Huang: CAT-Det: Contrastively Augmented Transformer for Multi-modal 3D Object Detection. CVPR 2022.\\
HA-PillarNet & & 72.50 \% & 86.25 \% & 65.38 \% & 0.05 s / 1 core & \\
KPTr & & 72.24 \% & 83.83 \% & 63.94 \% & 0.07 s / 1 core & \\
PA-Det3D & & 71.93 \% & 84.41 \% & 65.36 \% & 0.06 s / 1 core & \\
GF-pointnet & & 71.90 \% & 84.28 \% & 63.75 \% & 0.02 s / 1 core & \\
BtcDet & la & 71.76 \% & 84.48 \% & 64.70 \% & 0.09 s / GPU & Q. Xu, Y. Zhong and U. Neumann: Behind the Curtain: Learning Occluded Shapes for 3D Object Detection. AAAI 2022.\\
ACFNet & & 71.68 \% & 85.76 \% & 65.33 \% & 0.11 s / 1 core & Y. Tian, X. Zhang, X. Wang, J. Xu, J. Wang, R. Ai, W. Gu and W. Ding: ACF-Net: Asymmetric Cascade Fusion for 3D Detection With LiDAR Point Clouds and Images. IEEE Transactions on Intelligent Vehicles 2023.\\
RagNet3D & & 71.64 \% & 85.10 \% & 65.02 \% & 0.05 s / 1 core & \\
Anonymous & & 71.61 \% & 86.04 \% & 63.31 \% & 0.04 s / 1 core & \\
focalnet & & 71.57 \% & 82.10 \% & 65.37 \% & 0.05 s / 1 core & \\
PointPainting & la & 71.54 \% & 83.91 \% & 62.97 \% & 0.4 s / GPU & S. Vora, A. Lang, B. Helou and O. Beijbom: PointPainting: Sequential Fusion for 3D Object Detection. CVPR 2020.\\
PASS-PV-RCNN-Plus & & 71.51 \% & 83.03 \% & 63.85 \% & 1 s / 1 core & Anonymous: Leveraging Anchor-based LiDAR 3D Object Detection via Point Assisted Sample Selection. will submit to computer vision conference/journal 2024.\\
PV-RCNN-Plus & & 71.51 \% & 83.83 \% & 64.77 \% & 1 s / 1 core & \\
RangeIoUDet & la & 71.49 \% & 85.99 \% & 63.62 \% & 0.02 s / GPU & Z. Liang, Z. Zhang, M. Zhang, X. Zhao and S. Pu: RangeIoUDet: Range Image Based Real-Time 3D Object Detector Optimized by Intersection Over Union. CVPR 2021.\\
ACDet & & 71.48 \% & 87.76 \% & 64.69 \% & 0.05 s / 1 core & J. Xu, G. Wang, X. Zhang and G. Wan: ACDet: Attentive Cross-view Fusion for LiDAR-based 3D Object Detection. 3DV 2022.\\
IA-SSD (single) & & 71.44 \% & 85.91 \% & 63.41 \% & 0.013 s / 1 core & Y. Zhang, Q. Hu, G. Xu, Y. Ma, J. Wan and Y. Guo: Not All Points Are Equal: Learning Highly Efficient Point-based Detectors for 3D LiDAR Point Clouds. CVPR 2022.\\
U\_PV\_V2\_ep\_100\_100 & & 71.35 \% & 84.08 \% & 63.95 \% & 0.1 s / 1 core & \\
PDV & & 71.31 \% & 85.54 \% & 64.40 \% & 0.1 s / 1 core & J. Hu, T. Kuai and S. Waslander: Point Density-Aware Voxels for LiDAR 3D Object Detection. CVPR 2022.\\
3ONet & & 71.29 \% & 85.17 \% & 62.99 \% & 0.1 s / 1 core & H. Hoang and M. Yoo: 3ONet: 3-D Detector for Occluded Object Under Obstructed Conditions. IEEE Sensors Journal 2023.\\
DFAF3D & & 71.27 \% & 85.75 \% & 64.25 \% & 0.05 s / 1 core & Q. Tang, X. Bai, J. Guo, B. Pan and W. Jiang: DFAF3D: A dual-feature-aware anchor-free single-stage 3D detector for point clouds. Image and Vision Computing 2023.\\
BPG3D & & 71.24 \% & 85.28 \% & 63.42 \% & 0.05 s / 1 core & \\
focalnet & & 71.24 \% & 81.78 \% & 65.37 \% & 0.05 s / 1 core & \\
HVNet & & 71.17 \% & 83.97 \% & 63.65 \% & 0.03 s / GPU & M. Ye, S. Xu and T. Cao: HVNet: Hybrid Voxel Network for LiDAR Based 3D Object Detection. CVPR 2020.\\
DiffCandiDet & & 71.11 \% & 85.33 \% & 64.52 \% & 0.06 s / GPU & \\
RAFDet & & 70.99 \% & 84.92 \% & 62.93 \% & 0.01 s / 1 core & \\
M3DeTR & & 70.89 \% & 85.03 \% & 63.14 \% & n/a / GPU & T. Guan, J. Wang, S. Lan, R. Chandra, Z. Wu, L. Davis and D. Manocha: M3DeTR: Multi-representation, Multi-scale, Mutual-relation 3D Object Detection with Transformers. 2021.\\
PR-SSD & & 70.88 \% & 83.44 \% & 63.43 \% & 0.02 s / GPU & \\
HAF-PVP\_test & & 70.66 \% & 83.99 \% & 62.42 \% & 0.09 s / 1 core & \\
PG-RCNN & & 70.65 \% & 84.94 \% & 64.03 \% & 0.06 s / GPU & I. Koo, I. Lee, S. Kim, H. Kim, W. Jeon and C. Kim: PG-RCNN: Semantic Surface Point Generation for 3D Object Detection. 2023.\\
AAMVFNet & & 70.52 \% & 84.47 \% & 63.85 \% & 0.04 s / GPU & \\
LGNet-3classes & & 70.44 \% & 81.32 \% & 62.95 \% & 0.11 s / 1 core & \\
AMVFNet & & 70.44 \% & 83.98 \% & 63.87 \% & 0.04 s / GPU & \\
SPG\_mini & la & 70.09 \% & 82.66 \% & 63.61 \% & 0.09 s / GPU & Q. Xu, Y. Zhou, W. Wang, C. Qi and D. Anguelov: SPG: Unsupervised Domain Adaptation for 3D Object Detection via Semantic Point Generation. ICCV 2021.\\
SDGUFusion & & 70.05 \% & 81.15 \% & 63.98 \% & 0.5 s / 1 core & \\
RAFDet & & 69.81 \% & 82.41 \% & 62.17 \% & 0.01 s / 1 core & \\
RAFDet & & 69.79 \% & 81.93 \% & 62.63 \% & 0.1 s / 1 core & \\
GeVo & & 69.56 \% & 83.03 \% & 62.74 \% & 0.05 s / 1 core & \\
MFB3D & & 69.52 \% & 83.15 \% & 63.38 \% & 0.14 s / 1 core & \\
GraphAlign(ICCV2023) & & 69.43 \% & 80.71 \% & 63.57 \% & 0.03 s / GPU & Z. Song, H. Wei, L. Bai, L. Yang and C. Jia: GraphAlign: Enhancing accurate feature alignment by graph matching for multi-modal 3D object detection. ICCV 2023.\\
LGSLNet & & 69.11 \% & 81.67 \% & 64.15 \% & 0.1 s / GPU & \\
u\_second\_v4\_epoch\_10 & & 69.10 \% & 84.24 \% & 62.39 \% & 0.1 s / 1 core & \\
FIRM-Net & & 69.09 \% & 82.99 \% & 62.48 \% & 0.07 s / 1 core & \\
MMLab PV-RCNN & la & 68.89 \% & 82.49 \% & 62.41 \% & 0.08 s / 1 core & S. Shi, C. Guo, L. Jiang, Z. Wang, J. Shi, X. Wang and H. Li: PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection. CVPR 2020.\\
F-ConvNet & la & 68.88 \% & 84.16 \% & 60.05 \% & 0.47 s / GPU & Z. Wang and K. Jia: Frustum ConvNet: Sliding Frustums to Aggregate Local Point-Wise Features for Amodal 3D Object Detection. IROS 2019.\\
IIOU & & 68.82 \% & 83.89 \% & 60.14 \% & 0.1 s / GPU & \\
SCNet3D & & 68.77 \% & 84.49 \% & 62.12 \% & 0.08 s / 1 core & \\
bs & & 68.73 \% & 82.32 \% & 62.18 \% & 0.1 s / 1 core & \\
MMLab-PartA$^2$ & la & 68.73 \% & 83.43 \% & 61.85 \% & 0.08 s / GPU & S. Shi, Z. Wang, J. Shi, X. Wang and H. Li: From Points to Parts: 3D Object Detection from Point Cloud with Part-aware and Part-aggregation Network. IEEE Transactions on Pattern Analysis and Machine Intelligence 2020.\\
U\_second\_v4\_ep\_100\_8 & & 68.62 \% & 82.37 \% & 61.11 \% & 0.1 s / 1 core & \\
HotSpotNet & & 68.51 \% & 83.29 \% & 61.84 \% & 0.04 s / 1 core & Q. Chen, L. Sun, Z. Wang, K. Jia and A. Yuille: Object as Hotspots. ECCV 2020.\\
MLFusion-VS & & 68.43 \% & 80.99 \% & 62.46 \% & 0.06 s / 1 core & \\
CG-SSD & & 68.24 \% & 79.80 \% & 61.05 \% & 0.01 s / 1 core & \\
P2V-RCNN & & 68.06 \% & 81.09 \% & 60.73 \% & 0.1 s / 2 cores & J. Li, S. Luo, Z. Zhu, H. Dai, A. Krylov, Y. Ding and L. Shao: P2V-RCNN: Point to Voxel Feature Learning for 3D Object Detection from Point Clouds. IEEE Access 2021.\\
SFA-GCL(80) & & 68.06 \% & 84.65 \% & 61.18 \% & 0.04 s / 1 core & \\
H$^2$3D R-CNN & & 67.90 \% & 82.76 \% & 60.49 \% & 0.03 s / 1 core & J. Deng, W. Zhou, Y. Zhang and H. Li: From Multi-View to Hollow-3D: Hallucinated Hollow-3D R-CNN for 3D Object Detection. IEEE Transactions on Circuits and Systems for Video Technology 2021.\\
SFA-GCL & & 67.72 \% & 84.16 \% & 60.89 \% & 0.04 s / 1 core & \\
focal & & 67.67 \% & 80.82 \% & 61.88 \% & 100 s / 1 core & \\
VPFNet & & 67.66 \% & 80.83 \% & 61.36 \% & 0.2 s / 1 core & C. Wang, H. Chen and L. Fu: VPFNet: Voxel-Pixel Fusion Network for Multi-class 3D Object Detection. 2021. C. Wang, H. Chen, Y. Chen, P. Hsiao and L. Fu: VoPiFNet: Voxel-Pixel Fusion Network for Multi-Class 3D Object Detection. IEEE Transactions on Intelligent Transportation Systems 2024.\\
3DSSD & & 67.62 \% & 85.04 \% & 61.14 \% & 0.04 s / GPU & Z. Yang, Y. Sun, S. Liu and J. Jia: 3DSSD: Point-based 3D Single Stage Object Detector. CVPR 2020.\\
Fast-CLOCs & & 67.55 \% & 83.34 \% & 59.61 \% & 0.1 s / GPU & S. Pang, D. Morris and H. Radha: Fast-CLOCs: Fast Camera-LiDAR Object Candidates Fusion for 3D Object Detection. WACV 2022.\\
SFA-GCL(80, k=4) & & 67.46 \% & 84.31 \% & 58.87 \% & 0.04 s / 1 core & \\
XT-PartA2 & & 67.40 \% & 81.41 \% & 61.92 \% & 0.1 s / GPU & \\
DVFENet & & 67.40 \% & 82.29 \% & 60.71 \% & 0.05 s / 1 core & Y. He, G. Xia, Y. Luo, L. Su, Z. Zhang, W. Li and P. Wang: DVFENet: Dual-branch Voxel Feature Extraction Network for 3D Object Detection. Neurocomputing 2021.\\
FromVoxelToPoint & & 67.36 \% & 82.68 \% & 59.15 \% & 0.1 s / 1 core & J. Li, H. Dai, L. Shao and Y. Ding: From Voxel to Point: IoU-guided 3D Object Detection for Point Cloud with Voxel-to-Point Decoder. ACM MM 2021.\\
Point-GNN & la & 67.28 \% & 81.17 \% & 59.67 \% & 0.6 s / GPU & W. Shi and R. Rajkumar: Point-GNN: Graph Neural Network for 3D Object Detection in a Point Cloud. CVPR 2020.\\
HINTED & & 67.27 \% & 81.53 \% & 60.88 \% & 0.04 s / 1 core & \\
MMLab-PointRCNN & la & 67.24 \% & 82.56 \% & 60.28 \% & 0.1 s / GPU & S. Shi, X. Wang and H. Li: PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud. CVPR 2019.\\
STD & & 67.23 \% & 81.36 \% & 59.35 \% & 0.08 s / GPU & Z. Yang, Y. Sun, S. Liu, X. Shen and J. Jia: STD: Sparse-to-Dense 3D Object Detector for Point Cloud. ICCV 2019.\\
mm3d\_PartA2 & & 67.12 \% & 82.10 \% & 60.42 \% & 0.1 s / GPU & \\
BAPartA2S-4h & & 67.05 \% & 82.22 \% & 61.08 \% & 0.1 s / 1 core & \\
SVGA-Net & & 66.82 \% & 81.25 \% & 59.37 \% & 0.03 s / 1 core & Q. He, Z. Wang, H. Zeng, Y. Zeng and Y. Liu: SVGA-Net: Sparse Voxel-Graph Attention Network for 3D Object Detection from Point Clouds. AAAI 2022.\\
S-AT GCN & & 66.71 \% & 78.53 \% & 60.19 \% & 0.02 s / GPU & L. Wang, C. Wang, X. Zhang, T. Lan and J. Li: S-AT GCN: Spatial-Attention Graph Convolution Network based Feature Enhancement for 3D Object Detection. CoRR 2021.\\
TF-PartA2 & & 66.67 \% & 82.42 \% & 60.64 \% & 0.1 s / 1 core & \\
casxv1 & & 66.50 \% & 81.01 \% & 60.09 \% & 0.01 s / 1 core & \\
ARPNET & & 66.39 \% & 82.32 \% & 58.80 \% & 0.08 s / GPU & Y. Ye, C. Zhang and X. Hao: ARPNET: attention region proposal network for 3D object detection. Science China Information Sciences 2019.\\
IA-SSD (multi) & & 66.29 \% & 81.30 \% & 59.58 \% & 0.014 s / 1 core & Y. Zhang, Q. Hu, G. Xu, Y. Ma, J. Wan and Y. Guo: Not All Points Are Equal: Learning Highly Efficient Point-based Detectors for 3D LiDAR Point Clouds. CVPR 2022.\\
MGAF-3DSSD & & 66.00 \% & 83.03 \% & 57.57 \% & 0.1 s / 1 core & J. Li, H. Dai, L. Shao and Y. Ding: Anchor-free 3D Single Stage Detector with Mask-Guided Attention for Point Cloud. ACM MM 2021.\\
IOUFusion & & 65.88 \% & 82.98 \% & 59.11 \% & 0.1 s / GPU & \\
AB3DMOT & la on & 65.85 \% & 80.00 \% & 58.69 \% & 0.0047 s / 1 core & X. Weng and K. Kitani: A Baseline for 3D Multi-Object Tracking. arXiv:1907.03961 2019.\\
EOTL & & 65.76 \% & 81.44 \% & 56.47 \% & TBD s / 1 core & R. Yang, Z. Yan, T. Yang, Y. Wang and Y. Ruichek: Efficient Online Transfer Learning for Road Participants Detection in Autonomous Driving. IEEE Sensors Journal 2023.\\
PI-SECOND & & 65.62 \% & 81.99 \% & 59.19 \% & 0.05 s / GPU & \\
MG & & 65.43 \% & 81.05 \% & 59.11 \% & 0.1 s / 1 core & \\
SC-SSD & & 65.36 \% & 79.14 \% & 58.50 \% & 1 s / 1 core & \\
SFA-GCL & & 65.22 \% & 82.10 \% & 56.54 \% & 0.04 s / 1 core & \\
af & & 65.12 \% & 78.85 \% & 59.17 \% & 1 s / GPU & \\
DGEnhCL & & 65.07 \% & 81.38 \% & 58.13 \% & 0.04 s / 1 core & \\
centerpoint\_pcdet & & 64.99 \% & 79.83 \% & 58.43 \% & 0.06 s / 1 core & \\
Test\_dif & & 64.80 \% & 80.24 \% & 58.49 \% & 0.01 s / 1 core & \\
voxelnext\_pcdet & & 64.66 \% & 81.10 \% & 57.53 \% & 0.05 s / 1 core & \\
GSG-FPS & & 64.65 \% & 78.65 \% & 58.47 \% & 0.01 s / 1 core & \\
Faraway-Frustum & la & 64.54 \% & 79.65 \% & 57.84 \% & 0.1 s / GPU & H. Zhang, D. Yang, E. Yurtsever, K. Redmill and U. Ozguner: Faraway-Frustum: Dealing with LiDAR sparsity for 3D object detection using fusion. ITSC 2021.\\
SRDL & & 64.52 \% & 79.64 \% & 57.90 \% & 0.05 s / 1 core & \\
VoxelFSD-S & & 64.26 \% & 80.07 \% & 57.17 \% & 0.05 s / 1 core & \\
SIF & & 64.13 \% & 79.32 \% & 57.38 \% & 0.1 s / 1 core & P. An: SIF. Submitted to CVIU 2021.\\
prcnn\_v18\_80\_100 & & 63.87 \% & 80.66 \% & 57.25 \% & 0.1 s / 1 core & \\
TANet & & 63.77 \% & 79.16 \% & 56.21 \% & 0.035 s / GPU & Z. Liu, X. Zhao, T. Huang, R. Hu, Y. Zhou and X. Bai: TANet: Robust 3D Object Detection from Point Clouds with Triple Attention. AAAI 2020.\\
SFA-GCL\_dataaug & & 63.35 \% & 81.93 \% & 56.47 \% & 0.04 s / 1 core & \\
ROT\_S3D & & 63.26 \% & 79.60 \% & 56.95 \% & 0.1 s / GPU & \\
SFA-GCL(baseline) & & 63.24 \% & 81.50 \% & 56.42 \% & 0.04 s / 1 core & \\
XView & & 63.06 \% & 81.32 \% & 56.65 \% & 0.1 s / 1 core & L. Xie, G. Xu, D. Cai and X. He: X-view: Non-egocentric Multi-View 3D Object Detector. 2021.\\
EPNet++ & & 62.94 \% & 78.57 \% & 56.62 \% & 0.1 s / GPU & Z. Liu, T. Huang, B. Li, X. Chen, X. Wang and X. Bai: EPNet++: Cascade Bi-Directional Fusion for Multi-Modal 3D Object Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 2022.\\
PointPillars & la & 62.73 \% & 79.90 \% & 55.58 \% & 16 ms / & A. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang and O. Beijbom: PointPillars: Fast Encoders for Object Detection from Point Clouds. CVPR 2019.\\
MM\_SECOND & & 62.61 \% & 77.98 \% & 55.67 \% & 0.05 s / GPU & \\
L-AUG & & 62.56 \% & 75.41 \% & 56.86 \% & 0.1 s / 1 core & T. Cortinhal, I. Gouigah and E. Aksoy: Semantics-aware LiDAR-Only Pseudo Point Cloud Generation for 3D Object Detection. 2023.\\
MMpp & & 61.70 \% & 75.76 \% & 54.75 \% & 0.05 s / 1 core & \\
IIOU\_LDR & & 61.70 \% & 77.26 \% & 55.43 \% & 0.03 s / 1 core & \\
SeSame-point & & 61.70 \% & 75.73 \% & 55.27 \% & N/A / TITAN RTX & \\
LVFSD & & 61.68 \% & 79.03 \% & 55.02 \% & 0.06 s / & \\
F-PointNet & la & 61.37 \% & 77.26 \% & 53.78 \% & 0.17 s / GPU & C. Qi, W. Liu, C. Wu, H. Su and L. Guibas: Frustum PointNets for 3D Object Detection from RGB-D Data. arXiv preprint arXiv:1711.08488 2017.\\
MMpointpillars & & 61.06 \% & 74.55 \% & 55.02 \% & 0.05 s / 1 core & \\
P2P & & 61.03 \% & 75.03 \% & 55.05 \% & 0.1 s / GPU & \\
VSAC & & 60.23 \% & 78.55 \% & 53.91 \% & 0.07 s / 1 core & \\
SeSame-pillar & & 60.21 \% & 72.22 \% & 53.67 \% & N/A / TITAN RTX & \\
epBRM & la & 59.79 \% & 75.13 \% & 53.36 \% & 0.10 s / 1 core & K. Shin: Improving a Quality of 3D Object Detection by Spatial Transformation Mechanism. arXiv preprint arXiv:1910.04853 2019.\\
BirdNet+ & la & 59.58 \% & 70.84 \% & 54.20 \% & 0.11 s / & A. Barrera, J. Beltrán, C. Guindel, J. Iglesias and F. García: BirdNet+: Two-Stage 3D Object Detection in LiDAR through a Sparsity-Invariant Bird’s Eye View. IEEE Access 2021.\\
SeSame-voxel & & 59.36 \% & 76.95 \% & 53.14 \% & N/A / TITAN RTX & \\
SFEBEV & & 58.28 \% & 73.10 \% & 52.31 \% & 0.01 s / 1 core & \\
DMF & st & 57.99 \% & 71.92 \% & 51.55 \% & 0.2 s / 1 core & X. J. Chen and W. Xu: Disparity-Based Multiscale Fusion Network for Transportation Detection. IEEE Transactions on Intelligent Transportation Systems 2022.\\
PUDet & & 57.77 \% & 72.93 \% & 51.03 \% & 0.3 s / GPU & \\
PointRGBNet & & 57.59 \% & 73.09 \% & 51.78 \% & 0.08 s / 4 cores & P. Xie Desheng: Real-time Detection of 3D Objects Based on Multi-Sensor Information Fusion. Automotive Engineering 2022.\\
HA PillarNet & & 57.56 \% & 71.10 \% & 50.67 \% & 0.05 s / 1 core & \\
AVOD-FPN & la & 57.12 \% & 69.39 \% & 51.09 \% & 0.1 s / & J. Ku, M. Mozifian, J. Lee, A. Harakeh and S. Waslander: Joint 3D Proposal Generation and Object Detection from View Aggregation. IROS 2018.\\
PiFeNet & & 56.94 \% & 72.80 \% & 50.04 \% & 0.03 s / 1 core & D. Le, H. Shi, H. Rezatofighi and J. Cai: Accurate and Real-time 3D Pedestrian Detection Using an Efficient Attentive Pillar Network. IEEE Robotics and Automation Letters 2022.\\
SCNet & la & 56.39 \% & 73.73 \% & 49.99 \% & 0.04 s / GPU & Z. Wang, H. Fu, L. Wang, L. Xiao and B. Dai: SCNet: Subdivision Coding Network for Object Detection Based on 3D Point Cloud. IEEE Access 2019.\\
PFF3D & la & 55.71 \% & 72.67 \% & 49.58 \% & 0.05 s / GPU & L. Wen and K. Jo: Fast and Accurate 3D Object Detection for Lidar-Camera-Based Autonomous Vehicles Using One Shared Voxel-Based Backbone. IEEE Access 2021.\\
DFSemONet(Baseline) & & 55.20 \% & 73.44 \% & 49.29 \% & 0.04 s / GPU & \\
MLOD & la & 55.06 \% & 73.03 \% & 48.21 \% & 0.12 s / GPU & J. Deng and K. Czarnecki: MLOD: A multi-view 3D object detection based on robust feature fusion method. arXiv preprint arXiv:1909.04163 2019.\\
BirdNet+ (legacy) & la & 52.15 \% & 72.45 \% & 46.57 \% & 0.1 s / & A. Barrera, C. Guindel, J. Beltrán and F. García: BirdNet+: End-to-End 3D Object Detection in LiDAR Bird’s Eye View. ITSC 2020.\\
DSGN++ & st & 49.37 \% & 68.29 \% & 43.79 \% & 0.2 s / & Y. Chen, S. Huang, S. Liu, B. Yu and J. Jia: DSGN++: Exploiting Visual-Spatial Relation for Stereo-Based 3D Detectors. IEEE Transactions on Pattern Analysis and Machine Intelligence 2022.\\
StereoDistill & & 48.37 \% & 69.46 \% & 42.69 \% & 0.4 s / 1 core & Z. Liu, X. Ye, X. Tan, E. Ding, Y. Zhou and X. Bai: StereoDistill: Pick the Cream from LiDAR for Distilling Stereo-based 3D Object Detection. AAAI 2023.\\
AVOD & la & 48.15 \% & 64.11 \% & 42.37 \% & 0.08 s / & J. Ku, M. Mozifian, J. Lee, A. Harakeh and S. Waslander: Joint 3D Proposal Generation and Object Detection from View Aggregation. IROS 2018.\\
SeSame-voxel w/score & & 45.61 \% & 58.94 \% & 40.68 \% & N/A / GPU & \\
BirdNet & la & 41.56 \% & 58.64 \% & 36.94 \% & 0.11 s / & J. Beltrán, C. Guindel, F. Moreno, D. Cruzado, F. García and A. Escalera: BirdNet: A 3D Object Detection Framework from LiDAR Information. ITSC 2018.\\
SparsePool & & 40.74 \% & 56.52 \% & 36.68 \% & 0.13 s / 8 cores & Z. Wang, W. Zhan and M. Tomizuka: Fusing bird view lidar point cloud and front view camera image for deep object detection. arXiv preprint arXiv:1711.06703 2017.\\
MMLAB LIGA-Stereo & st & 40.60 \% & 58.95 \% & 35.27 \% & 0.4 s / 1 core & X. Guo, S. Shi, X. Wang and H. Li: LIGA-Stereo: Learning LiDAR Geometry Aware Representations for Stereo-based 3D Detector. ICCV 2021.\\
TopNet-Retina & la & 36.83 \% & 47.48 \% & 33.58 \% & 52 ms / & S. Wirges, T. Fischer, C. Stiller and J. Frias: Object Detection and Classification in Occupancy Grid Maps Using Deep Convolutional Networks. ITSC 2018.\\
CG-Stereo & st & 36.25 \% & 55.33 \% & 32.17 \% & 0.57 s / & C. Li, J. Ku and S. Waslander: Confidence Guided Stereo 3D Object Detection with Split Depth Estimation. IROS 2020.\\
SparsePool & & 35.24 \% & 43.55 \% & 30.15 \% & 0.13 s / 8 cores & Z. Wang, W. Zhan and M. Tomizuka: Fusing bird view lidar point cloud and front view camera image for deep object detection. arXiv preprint arXiv:1711.06703 2017.\\
Disp R-CNN (velo) & st & 27.04 \% & 44.19 \% & 23.58 \% & 0.387 s / GPU & J. Sun, L. Chen, Y. Xie, S. Zhang, Q. Jiang, X. Zhou and H. Bao: Disp R-CNN: Stereo 3D Object Detection via Shape Prior Guided Instance Disparity Estimation. CVPR 2020.\\
Disp R-CNN & st & 27.04 \% & 44.19 \% & 23.58 \% & 0.387 s / GPU & J. Sun, L. Chen, Y. Xie, S. Zhang, Q. Jiang, X. Zhou and H. Bao: Disp R-CNN: Stereo 3D Object Detection via Shape Prior Guided Instance Disparity Estimation. CVPR 2020.\\
Complexer-YOLO & la & 25.43 \% & 32.00 \% & 22.88 \% & 0.06 s / GPU & M. Simon, K. Amende, A. Kraus, J. Honer, T. Sämann, H. Kaulbersch, S. Milz and H.-M. Gross: Complexer-YOLO: Real-Time 3D Object Detection and Tracking on Semantic Point Clouds. CVPR Workshops 2019.\\
DSGN & st & 21.04 \% & 31.23 \% & 18.93 \% & 0.67 s / & Y. Chen, S. Liu, X. Shen and J. Jia: DSGN: Deep Stereo Geometry Network for 3D Object Detection. CVPR 2020.\\
SeSame-pillar w/scor & & 19.53 \% & 15.92 \% & 17.61 \% & N/A / 1 core & \\
OC Stereo & st & 19.23 \% & 32.47 \% & 17.11 \% & 0.35 s / 1 core & A. Pon, J. Ku, C. Li and S. Waslander: Object-Centric Stereo Matching for 3D Object Detection. ICRA 2020.\\
TopNet-DecayRate & la & 16.00 \% & 23.02 \% & 13.24 \% & 92 ms / & S. Wirges, T. Fischer, C. Stiller and J. Frias: Object Detection and Classification in Occupancy Grid Maps Using Deep Convolutional Networks. ITSC 2018.\\
SST [st] & st & 15.20 \% & 26.40 \% & 13.47 \% & 1 s / 1 core & \\
RT3D-GMP & st & 13.92 \% & 20.59 \% & 12.74 \% & 0.06 s / GPU & H. Königshof and C. Stiller: Learning-Based Shape Estimation with Grid Map Patches for Realtime 3D Object Detection for Automated Driving. ITSC 2020.\\
MonoTAKD V2 & & 11.66 \% & 19.68 \% & 10.33 \% & 0.1 s / 1 core & \\
MonoTAKD & & 11.17 \% & 17.98 \% & 9.99 \% & 0.1 s / 1 core & \\
MonoLTKD\_V3 & & 9.42 \% & 16.90 \% & 8.29 \% & 0.04 s / 1 core & \\
TopNet-UncEst & la & 9.18 \% & 12.31 \% & 8.14 \% & 0.09 s / & S. Wirges, M. Braun, M. Lauer and C. Stiller: Capturing Object Detection Uncertainty in Multi-Layer Grid Maps. 2019.\\
ESGN & st & 9.02 \% & 15.78 \% & 7.96 \% & 0.06 s / GPU & A. Gao, Y. Pang, J. Nie, Z. Shao, J. Cao, Y. Guo and X. Li: ESGN: Efficient Stereo Geometry Network for Fast 3D Object Detection. IEEE Transactions on Circuits and Systems for Video Technology 2022.\\
SeSame-point w/score & & 8.90 \% & 10.65 \% & 7.68 \% & N/A / GPU & \\
MonoLTKD & & 8.25 \% & 13.73 \% & 7.01 \% & 0.04 s / 1 core & \\
CMKD & & 8.15 \% & 14.66 \% & 7.23 \% & 0.1 s / 1 core & Y. Hong, H. Dai and Y. Ding: Cross-Modality Knowledge Distillation Network for Monocular 3D Object Detection. ECCV 2022.\\
MonoGhost\_Ped\_Cycl & & 8.11 \% & 12.23 \% & 6.75 \% & 0.03 s / 1 core & \\
PS-fld & & 7.29 \% & 12.80 \% & 6.05 \% & 0.25 s / 1 core & Y. Chen, H. Dai and Y. Ding: Pseudo-Stereo for Monocular 3D Object Detection in Autonomous Driving. CVPR 2022.\\
error & & 6.57 \% & 11.33 \% & 5.94 \% & 1 s / 1 core & \\
MonoLiG & & 6.49 \% & 9.48 \% & 5.46 \% & 0.03 s / 1 core & A. Hekimoglu, M. Schmidt and A. Ramiro: Monocular 3D Object Detection with LiDAR Guided Semi Supervised Active Learning. 2023.\\
TopNet-HighRes & la & 6.48 \% & 9.99 \% & 6.76 \% & 101 ms / & S. Wirges, T. Fischer, C. Stiller and J. Frias: Object Detection and Classification in Occupancy Grid Maps Using Deep Convolutional Networks. ITSC 2018.\\
DA3D+KM3D+v2-99 & & 5.82 \% & 9.73 \% & 4.88 \% & 0.120s / GPU & Y. Jia, J. Wang, H. Pan and W. Sun: Enhancing Monocular 3-D Object Detection Through Data Augmentation Strategies. IEEE Transactions on Instrumentation and Measurement 2024.\\
MonoPSR & & 5.78 \% & 9.87 \% & 4.57 \% & 0.2 s / GPU & J. Ku*, A. Pon* and S. Waslander: Monocular 3D Object Detection Leveraging Accurate Proposals and Shape Reconstruction. CVPR 2019.\\
DD3D & & 5.69 \% & 9.20 \% & 5.20 \% & n/a / 1 core & D. Park, R. Ambrus, V. Guizilini, J. Li and A. Gaidon: Is Pseudo-Lidar Needed for Monocular 3D Object Detection?. ICCV.\\
MonoSIM\_v2 & & 5.61 \% & 9.09 \% & 4.77 \% & 0.03 s / 1 core & \\
MonoLSS & & 5.52 \% & 8.88 \% & 4.98 \% & 0.04 s / 1 core & Z. Li, J. Jia and Y. Shi: MonoLSS: Learnable Sample Selection For Monocular 3D Detection. International Conference on 3D Vision 2024.\\
CaDDN & & 5.38 \% & 9.67 \% & 4.75 \% & 0.63 s / GPU & C. Reading, A. Harakeh, J. Chae and S. Waslander: Categorical Depth Distribution Network for Monocular 3D Object Detection. CVPR 2021.\\
Mix-Teaching & & 5.36 \% & 8.56 \% & 4.62 \% & 30 s / 1 core & L. Yang, X. Zhang, L. Wang, M. Zhu, C. Zhang and J. Li: Mix-Teaching: A Simple, Unified and Effective Semi-Supervised Learning Framework for Monocular 3D Object Detection. arXiv 2022.\\
PS-SVDM & & 5.34 \% & 9.20 \% & 4.31 \% & 1 s / 1 core & Y. Shi: SVDM: Single-View Diffusion Model for Pseudo-Stereo 3D Object Detection. arXiv preprint arXiv:2307.02270 2023.\\
Anonymous & & 5.23 \% & 8.88 \% & 4.47 \% & 0.1 s / 1 core & \\
MonoUNI & & 5.03 \% & 8.25 \% & 4.50 \% & 0.04 s / 1 core & J. Jia, Z. Li and Y. Shi: MonoUNI: A Unified Vehicle and Infrastructure-side Monocular 3D Object Detection Network with Sufficient Depth Clues. NeurIPS 2023.\\
MonoTRKDv2 & & 5.01 \% & 9.08 \% & 4.21 \% & 40 s / 1 core & \\
mdab & & 4.97 \% & 8.83 \% & 4.74 \% & 22 s / 1 core & \\
LPCG-Monoflex & & 4.90 \% & 8.14 \% & 3.86 \% & 0.03 s / 1 core & L. Peng, F. Liu, Z. Yu, S. Yan, D. Deng, Z. Yang, H. Liu and D. Cai: Lidar Point Cloud Guided Monocular 3D Object Detection. ECCV 2022.\\
Plane-Constraints & & 4.79 \% & 8.67 \% & 3.90 \% & 0.05 s / 4 cores & H. Yao, J. Chen, Z. Wang, X. Wang, X. Chai, Y. Qiu and P. Han: Vertex points are not enough: Monocular 3D object detection via intra-and inter-plane constraints. Neural Networks 2023.\\
MonoFRD & & 4.55 \% & 8.44 \% & 4.14 \% & 0.01 s / 1 core & \\
MonoDDE & & 4.36 \% & 6.68 \% & 3.76 \% & 0.04 s / 1 core & Z. Li, Z. Qu, Y. Zhou, J. Liu, H. Wang and L. Jiang: Diversity Matters: Fully Exploiting Depth Clues for Reliable Monocular 3D Object Detection. CVPR 2022.\\
MonoDTR & & 4.11 \% & 5.84 \% & 3.48 \% & 0.04 s / 1 core & K. Huang, T. Wu, H. Su and W. Hsu: MonoDTR: Monocular 3D Object Detection with Depth-Aware Transformer. CVPR 2022.\\
RT3DStereo & st & 4.10 \% & 7.03 \% & 3.88 \% & 0.08 s / GPU & H. Königshof, N. Salscheider and C. Stiller: Realtime 3D Object Detection for Automated Driving Using Stereo Vision and Semantic Information. Proc. IEEE Intl. Conf. Intelligent Transportation Systems 2019.\\
HomoLoss(monoflex) & & 4.09 \% & 6.81 \% & 3.78 \% & 0.04 s / 1 core & J. Gu, B. Wu, L. Fan, J. Huang, S. Cao, Z. Xiang and X. Hua: Homography Loss for Monocular 3D Object Detection. CVPR 2022.\\
DFR-Net & & 4.00 \% & 5.99 \% & 3.95 \% & 0.18 s / & Z. Zou, X. Ye, L. Du, X. Cheng, X. Tan, L. Zhang, J. Feng, X. Xue and E. Ding: The Devil is in the Task: Exploiting Reciprocal Appearance-Localization Features for Monocular 3D Object Detection. ICCV 2021.\\
DEVIANT & & 3.97 \% & 6.42 \% & 3.51 \% & 0.04 s / & A. Kumar, G. Brazil, E. Corona, A. Parchami and X. Liu: DEVIANT: Depth EquiVarIAnt NeTwork for Monocular 3D Object Detection. European Conference on Computer Vision (ECCV) 2022.\\
GUPNet & & 3.85 \% & 6.94 \% & 3.64 \% & n/a / 1 core & Y. Lu, X. Ma, L. Yang, T. Zhang, Y. Liu, Q. Chu, J. Yan and W. Ouyang: Geometry Uncertainty Projection Network for Monocular 3D Object Detection. arXiv preprint arXiv:2107.13774 2021.\\
OPA-3D & & 3.75 \% & 6.01 \% & 3.56 \% & 0.04 s / 1 core & Y. Su, Y. Di, G. Zhai, F. Manhardt, J. Rambach, B. Busam, D. Stricker and F. Tombari: OPA-3D: Occlusion-Aware Pixel-Wise Aggregation for Monocular 3D Object Detection. IEEE Robotics and Automation Letters 2023.\\
CIE & & 3.74 \% & 6.13 \% & 3.18 \% & 0.1 s / 1 core & Anonymities: Consistency of Implicit and Explicit Features Matters for Monocular 3D Object Detection. arXiv preprint arXiv:2207.07933 2022.\\
PS-SVDM & & 3.64 \% & 6.84 \% & 3.04 \% & 1 s / 1 core & Y. Shi: SVDM: Single-View Diffusion Model for Pseudo-Stereo 3D Object Detection. arXiv preprint arXiv:2307.02270 2023.\\
SGM3D & & 3.63 \% & 7.05 \% & 3.33 \% & 0.03 s / 1 core & Z. Zhou, L. Du, X. Ye, Z. Zou, X. Tan, L. Zhang, X. Xue and J. Feng: SGM3D: Stereo Guided Monocular 3D Object Detection. RA-L 2022.\\
SH3D & & 3.60 \% & 6.73 \% & 3.30 \% & 0.1 s / 1 core & \\
mdab & & 3.38 \% & 6.94 \% & 3.21 \% & 0.02 s / 1 core & \\
Cube R-CNN & & 3.35 \% & 5.01 \% & 3.23 \% & 0.05 s / GPU & G. Brazil, A. Kumar, J. Straub, N. Ravi, J. Johnson and G. Gkioxari: Omni3D: A Large Benchmark and Model for 3D Object Detection in the Wild. CVPR 2023.\\
Aug3D-RPN & & 3.33 \% & 5.44 \% & 2.82 \% & 0.08 s / 1 core & C. He, J. Huang, X. Hua and L. Zhang: Aug3D-RPN: Improving Monocular 3D Object Detection by Synthetic Images with Virtual Depth. 2021.\\
MonOAPC & & 3.31 \% & 6.54 \% & 3.05 \% & 0.035 s / 1 core & H. Yao, J. Chen, Z. Wang, X. Wang, P. Han, X. Chai and Y. Qiu: Occlusion-Aware Plane-Constraints for Monocular 3D Object Detection. IEEE Transactions on Intelligent Transportation Systems 2023.\\
monodle & & 3.28 \% & 5.34 \% & 2.83 \% & 0.04 s / GPU & X. Ma, Y. Zhang, D. Xu, D. Zhou, S. Yi, H. Li and W. Ouyang: Delving into Localization Errors for Monocular 3D Object Detection. CVPR 2021.\\
MDSNet & & 3.22 \% & 5.99 \% & 2.62 \% & 0.05 s / 1 core & Z. Xie, Y. Song, J. Wu, Z. Li, C. Song and Z. Xu: MDS-Net: Multi-Scale Depth Stratification 3D Object Detection from Monocular Images. Sensors 2022.\\
DDMP-3D & & 3.14 \% & 4.92 \% & 2.44 \% & 0.18 s / 1 core & L. Wang, L. Du, X. Ye, Y. Fu, G. Guo, X. Xue, J. Feng and L. Zhang: Depth-conditioned Dynamic Message Propagation for Monocular 3D Object Detection. CVPR 2021.\\
MonoSIM & & 3.05 \% & 5.40 \% & 2.60 \% & 0.16 s / 1 core & \\
QD-3DT & on & 3.02 \% & 5.71 \% & 2.73 \% & 0.03 s / GPU & H. Hu, Y. Yang, T. Fischer, F. Yu, T. Darrell and M. Sun: Monocular Quasi-Dense 3D Object Tracking. ArXiv:2103.07351 2021.\\
MonoPair & & 2.87 \% & 4.76 \% & 2.42 \% & 0.06 s / GPU & Y. Chen, L. Tai, K. Sun and M. Li: MonoPair: Monocular 3D Object Detection Using Pairwise Spatial Relationships. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020.\\
MonoNeRD & & 2.80 \% & 5.24 \% & 2.55 \% & NA s / 1 core & J. Xu, L. Peng, H. Cheng, H. Li, W. Qian, K. Li, W. Wang and D. Cai: MonoNeRD: NeRF-like Representations for Monocular 3D Object Detection. ICCV 2023.\\
MonoFlex & & 2.67 \% & 4.41 \% & 2.50 \% & 0.03 s / GPU & Y. Zhang, J. Lu and J. Zhou: Objects are Different: Flexible Monocular 3D Object Detection. CVPR 2021.\\
mdab & & 2.63 \% & 4.95 \% & 2.65 \% & 22 s / 1 core & \\
RefinedMPL & & 2.42 \% & 4.23 \% & 2.14 \% & 0.15 s / GPU & J. Vianney, S. Aich and B. Liu: RefinedMPL: Refined Monocular PseudoLiDAR for 3D Object Detection in Autonomous Driving. arXiv preprint arXiv:1911.09712 2019.\\
MonoRCNN++ & & 2.31 \% & 3.50 \% & 2.01 \% & 0.07 s / GPU & X. Shi, Z. Chen and T. Kim: Multivariate Probabilistic Monocular 3D Object Detection. WACV 2023.\\
mdab & & 2.00 \% & 4.00 \% & 2.00 \% & 0.02 s / 1 core & \\
SS3D & & 1.89 \% & 3.45 \% & 1.44 \% & 48 ms / & E. Jörgensen, C. Zach and F. Kahl: Monocular 3D Object Detection and Box Fitting Trained End-to-End Using Intersection-over-Union Loss. CoRR 2019.\\
DA3D & & 1.89 \% & 3.46 \% & 1.51 \% & 0.03 s / 1 core & Y. Jia, J. Wang, H. Pan and W. Sun: Enhancing Monocular 3-D Object Detection Through Data Augmentation Strategies. IEEE Transactions on Instrumentation and Measurement 2024.\\
D4LCN & & 1.82 \% & 2.72 \% & 1.79 \% & 0.2 s / GPU & M. Ding, Y. Huo, H. Yi, Z. Wang, J. Shi, Z. Lu and P. Luo: Learning Depth-Guided Convolutions for Monocular 3D Object Detection. CVPR 2020.\\
PGD-FCOS3D & & 1.79 \% & 3.54 \% & 1.56 \% & 0.03 s / 1 core & T. Wang, X. Zhu, J. Pang and D. Lin: Probabilistic and Geometric Depth: Detecting Objects in Perspective. Conference on Robot Learning (CoRL) 2021.\\
FMF-occlusion-net & & 1.65 \% & 1.91 \% & 1.75 \% & 0.16 s / 1 core & H. Liu, H. Liu, Y. Wang, F. Sun and W. Huang: Fine-grained Multi-level Fusion for Anti- occlusion Monocular 3D Object Detection. IEEE Transactions on Image Processing 2022.\\
MonoAuxNorm & & 1.65 \% & 3.00 \% & 1.37 \% & 0.02 s / GPU & \\
CMAN & & 1.48 \% & 1.76 \% & 1.17 \% & 0.15 s / 1 core & Y. Cao: CMAN: Learning Global Structure Correlation for Monocular 3D Object Detection. IEEE Trans. Intell. Transport. Syst. 2022.\\
DA3D+KM3D & & 1.44 \% & 2.88 \% & 1.37 \% & 0.02 s / GPU & Y. Jia, J. Wang, H. Pan and W. Sun: Enhancing Monocular 3-D Object Detection Through Data Augmentation Strategies. IEEE Transactions on Instrumentation and Measurement 2024.\\
MonoEF & & 1.18 \% & 2.36 \% & 1.15 \% & 0.03 s / 1 core & Y. Zhou, Y. He, H. Zhu, C. Wang, H. Li and Q. Jiang: Monocular 3D Object Detection: An Extrinsic Parameter Free Approach. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021.\\
M3D-RPN & & 0.81 \% & 1.25 \% & 0.78 \% & 0.16 s / GPU & G. Brazil and X. Liu: M3D-RPN: Monocular 3D Region Proposal Network for Object Detection. ICCV 2019.\\
MonoRUn & & 0.73 \% & 1.14 \% & 0.66 \% & 0.07 s / GPU & H. Chen, Y. Huang, W. Tian, Z. Gao and L. Xiong: MonoRUn: Monocular 3D Object Detection by Reconstruction and Uncertainty Propagation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2021.\\
MonoAIU & & 0.72 \% & 0.92 \% & 0.45 \% & 0.03 s / GPU & \\
Shift R-CNN (mono) & & 0.38 \% & 0.76 \% & 0.41 \% & 0.25 s / GPU & A. Naiden, V. Paunescu, G. Kim, B. Jeon and M. Leordeanu: Shift R-CNN: Deep Monocular 3D Object Detection With Closed-form Geometric Constraints. ICIP 2019.\\
f3sd & & 0.01 \% & 0.02 \% & 0.01 \% & 1.67 s / 1 core & \\
mBoW & la & 0.00 \% & 0.00 \% & 0.00 \% & 10 s / 1 core & J. Behley, V. Steinhage and A. Cremers: Laser-based Segment Classification Using a Mixture of Bag-of-Words. Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2013.
\end{tabular}