3D Object Detection Evaluation 2017


The 3D object detection benchmark consists of 7481 training images and 7518 test images as well as the corresponding point clouds, comprising a total of 80,256 labeled objects. For evaluation, we compute precision-recall curves and rank the methods by average precision. We require that all methods use the same parameter set for the entire test set. Our development kit provides details about the data format as well as MATLAB / C++ utility functions for reading and writing the label files.
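
As an illustration of the ranking metric, the following is a minimal sketch of interpolated average precision computed from a precision-recall curve. The use of 11 equally spaced recall points follows the classic PASCAL VOC convention and is an assumption here; the function name and array layout are hypothetical and not taken from the official development kit.

import numpy as np

def average_precision(precisions, recalls, num_points=11):
    # precisions/recalls: equal-length arrays describing one method's
    # precision-recall curve for one class and one difficulty level.
    precisions = np.asarray(precisions, dtype=float)
    recalls = np.asarray(recalls, dtype=float)
    ap = 0.0
    for r in np.linspace(0.0, 1.0, num_points):
        # Interpolated precision: the highest precision achieved at any
        # recall >= r (0 if that recall level is never reached).
        above = precisions[recalls >= r]
        ap += (above.max() if above.size else 0.0) / num_points
    return ap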

We evaluate 3D object detection performance using the PASCAL criteria also used for 2D object detection. Far objects are thus filtered based on their bounding box height in the image plane. As only objects that also appear in the image plane are labeled, objects in don't care areas do not count as false positives. Note that the evaluation does not ignore detections that are not visible in the image plane; such detections might give rise to false positives. For cars we require a 3D bounding box overlap of 70%, while for pedestrians and cyclists we require a 3D bounding box overlap of 50%. Difficulties are defined as follows:

  • Easy: Min. bounding box height: 40 px, Max. occlusion level: Fully visible, Max. truncation: 15 %
  • Moderate: Min. bounding box height: 25 px, Max. occlusion level: Partly occluded, Max. truncation: 30 %
  • Hard: Min. bounding box height: 25 px, Max. occlusion level: Difficult to see, Max. truncation: 50 %

All methods are ranked based on the moderately difficult results.
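
To make the rules above concrete, here is a minimal sketch of how a labeled object could be assigned a difficulty level and how the class-dependent minimum 3D overlap could be checked. The thresholds are taken directly from the text; the integer encoding of the occlusion levels follows the common KITTI label convention (0 = fully visible, 1 = partly occluded, 2 = largely occluded / difficult to see) and is an assumption here, as are the function names, which do not come from the official development kit.

# Difficulty buckets: (name, min. 2D box height in px, max. occlusion level, max. truncation)
DIFFICULTIES = [
    ("easy",     40, 0, 0.15),
    ("moderate", 25, 1, 0.30),
    ("hard",     25, 2, 0.50),
]

# Minimum required 3D bounding box overlap per class.
MIN_3D_OVERLAP = {"Car": 0.7, "Pedestrian": 0.5, "Cyclist": 0.5}

def difficulty_of(bbox_height_px, occlusion_level, truncation):
    # Return the easiest bucket the ground-truth object satisfies,
    # or None if it is harder than "hard" (and thus not evaluated).
    for name, min_height, max_occlusion, max_truncation in DIFFICULTIES:
        if (bbox_height_px >= min_height
                and occlusion_level <= max_occlusion
                and truncation <= max_truncation):
            return name
    return None

def is_true_positive(class_name, overlap_3d):
    # A detection matches a ground-truth box of the same class only if
    # their 3D bounding box overlap reaches the class-specific threshold.
    return overlap_3d >= MIN_3D_OVERLAP[class_name]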

Additional information used by the methods
  • Stereo: Method uses left and right (stereo) images
  • Flow: Method uses optical flow (2 temporally adjacent images)
  • Multiview: Method uses more than 2 temporally adjacent images
  • Laser Points: Method uses point clouds from Velodyne laser scanner
  • Additional training data: Use of additional data sources for training (see details)

Car


Rank | Method | Setting | Moderate | Easy | Hard | Runtime | Environment
1 | F-PointNet | Laser Points | 70.39 % | 81.20 % | 62.19 % | 0.17 s | GPU @ 3.0 GHz (Python)
2 | AVOD | Laser Points | 65.78 % | 73.59 % | 58.38 % | 0.08 s | Titan X (Pascal)
3 | VxNet(LiDAR) | Laser Points | 65.11 % | 77.47 % | 57.73 % | 0.23 s | GPU @ 2.5 GHz (Python + C/C++)
4 | MV3D | Laser Points | 62.35 % | 71.09 % | 55.12 % | 0.36 s | GPU @ 2.5 GHz (Python + C/C++)
  X. Chen, H. Ma, J. Wan, B. Li and T. Xia: Multi-View 3D Object Detection Network for Autonomous Driving. CVPR 2017.
5 | MV3D (LIDAR) | Laser Points | 52.73 % | 66.77 % | 51.31 % | 0.24 s | GPU @ 2.5 GHz (Python + C/C++)
  X. Chen, H. Ma, J. Wan, B. Li and T. Xia: Multi-View 3D Object Detection Network for Autonomous Driving. CVPR 2017.
6 | F-PC_CNN | Laser Points | 42.67 % | 50.46 % | 40.15 % | 0.5 s | GPU @ 3.0 GHz (MATLAB + C/C++)
7 | SDN | Laser Points | 21.86 % | 32.29 % | 18.09 % | 0.08 s | GPU @ 1.5 GHz (Python)
8 | LMNetV2 | Laser Points | 15.24 % | 14.75 % | 12.85 % | 0.02 s | GPU @ 2.5 GHz (C/C++)
9 | 3dSSD | - | 14.97 % | 14.71 % | 19.43 % | 0.03 s | GPU @ 2.5 GHz (Python + C/C++)
10 | LMnet | Laser Points | 9.19 % | 11.32 % | 9.19 % | 0.1 s | GPU @ 1.1 GHz (Python + C/C++)
11 | DoBEM | - | 6.95 % | 7.42 % | 13.45 % | 0.6 s | GPU @ 2.5 GHz (Python + C/C++)
  S. Yu, T. Westfechtel, R. Hamada, K. Ohno and S. Tadokoro: Vehicle Detection and Localization on Bird's Eye View Elevation Images Using Convolutional Neural Network. IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR) 2017.
12 | CSoR | Laser Points | 6.79 % | 6.76 % | 6.14 % | 3.5 s | 4 cores @ >3.5 GHz (Python + C/C++)
  L. Plotkin: PyDriver: Entwicklung eines Frameworks für räumliche Detektion und Klassifikation von Objekten in Fahrzeugumgebung [Development of a Framework for Spatial Detection and Classification of Objects in the Vehicle Environment]. 2015.
13 | 3D-Mono | - | 4.64 % | 6.25 % | 4.19 % | 0.1 s | 1 core @ 2.5 GHz (Python + C/C++)
14 | SPC | Laser Points | 0.52 % | 0.68 % | 0.60 % | 0.4 s | 4 cores @ 2.5 GHz (Python)
15 | LidarNet | Laser Points | 0.02 % | 0.01 % | 0.03 % | 0.007 s | GPU @ 2.5 GHz (C/C++)
16 | mBoW | Laser Points | 0.00 % | 0.00 % | 0.00 % | 10 s | 1 core @ 2.5 GHz (C/C++)
  J. Behley, V. Steinhage and A. Cremers: Laser-based Segment Classification Using a Mixture of Bag-of-Words. Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2013.

Pedestrian


Rank | Method | Setting | Moderate | Easy | Hard | Runtime | Environment
1 | F-PointNet | Laser Points | 44.89 % | 51.21 % | 40.23 % | 0.17 s | GPU @ 3.0 GHz (Python)
2 | VxNet(LiDAR) | Laser Points | 33.69 % | 39.48 % | 31.51 % | 0.23 s | GPU @ 2.5 GHz (Python + C/C++)
3 | AVOD | Laser Points | 31.51 % | 38.28 % | 26.98 % | 0.08 s | Titan X (Pascal)
4 | 3dSSD | - | 17.35 % | 20.22 % | 17.20 % | 0.03 s | GPU @ 2.5 GHz (Python + C/C++)
5 | LMNetV2 | Laser Points | 11.46 % | 13.64 % | 11.57 % | 0.02 s | GPU @ 2.5 GHz (C/C++)
6 | LMnet | Laser Points | 2.13 % | 3.62 % | 2.21 % | 0.1 s | GPU @ 1.1 GHz (Python + C/C++)
7 | 3D-Mono | - | 0.69 % | 0.69 % | 0.65 % | 0.1 s | 1 core @ 2.5 GHz (Python + C/C++)
8 | mBoW | Laser Points | 0.00 % | 0.00 % | 0.00 % | 10 s | 1 core @ 2.5 GHz (C/C++)
  J. Behley, V. Steinhage and A. Cremers: Laser-based Segment Classification Using a Mixture of Bag-of-Words. Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2013.

Cyclist


Rank | Method | Setting | Moderate | Easy | Hard | Runtime | Environment
1 | F-PointNet | Laser Points | 56.77 % | 71.96 % | 50.39 % | 0.17 s | GPU @ 3.0 GHz (Python)
2 | VxNet(LiDAR) | Laser Points | 48.36 % | 61.22 % | 44.37 % | 0.23 s | GPU @ 2.5 GHz (Python + C/C++)
3 | AVOD | Laser Points | 44.90 % | 60.11 % | 38.80 % | 0.08 s | Titan X (Pascal)
4 | LMNetV2 | Laser Points | 3.23 % | 2.84 % | 3.28 % | 0.02 s | GPU @ 2.5 GHz (C/C++)
5 | LMnet | Laser Points | 0.32 % | 0.29 % | 0.35 % | 0.1 s | GPU @ 1.1 GHz (Python + C/C++)
6 | 3dSSD | - | 0.24 % | 0.25 % | 0.25 % | 0.03 s | GPU @ 2.5 GHz (Python + C/C++)
7 | 3D-Mono | - | 0.22 % | 0.22 % | 0.22 % | 0.1 s | 1 core @ 2.5 GHz (Python + C/C++)
8 | mBoW | Laser Points | 0.00 % | 0.00 % | 0.00 % | 10 s | 1 core @ 2.5 GHz (C/C++)
  J. Behley, V. Steinhage and A. Cremers: Laser-based Segment Classification Using a Mixture of Bag-of-Words. Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2013.


Citation

If you use this dataset in your research, please cite:
@INPROCEEDINGS{Geiger2012CVPR,
  author = {Andreas Geiger and Philip Lenz and Raquel Urtasun},
  title = {Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2012}
}


