Bird's Eye View Evaluation 2017


The bird's eye view benchmark consists of 7481 training images and 7518 test images as well as the corresponding point clouds, comprising a total of 80,256 labeled objects. For evaluation, we compute precision-recall curves. To rank the methods we compute average precision. We require that all methods use the same parameter set for all test images. Our development kit provides details about the data format as well as MATLAB / C++ utility functions for reading and writing the label files.
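The ranking metric can be illustrated with a simplified sketch of average precision over a precision-recall curve. The sketch below assumes detections are pre-sorted by descending confidence and uses 11-point interpolation (precision sampled at recall 0.0, 0.1, ..., 1.0); the development kit's exact recall sampling may differ.

```python
# Simplified sketch: average precision from a precision-recall curve,
# using 11-point interpolation. `tp` holds one 0/1 flag per detection
# (sorted by descending score); `n_gt` is the number of ground-truth objects.

def average_precision(tp, n_gt):
    precisions, recalls = [], []
    hits = 0
    for i, is_tp in enumerate(tp, start=1):
        hits += is_tp
        precisions.append(hits / i)   # precision after the i-th detection
        recalls.append(hits / n_gt)   # recall after the i-th detection
    ap = 0.0
    for r in [i / 10 for i in range(11)]:
        # interpolated precision: best precision at any recall >= r
        p = max((p_ for p_, r_ in zip(precisions, recalls) if r_ >= r),
                default=0.0)
        ap += p / 11
    return ap

# A detector with two correct detections over two ground-truth objects
# yields an AP of 1.0.
ap = average_precision([1, 1], n_gt=2)
```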

We evaluate bird's eye view detection performance using the PASCAL criteria also used for 2D object detection. Far objects are thus filtered based on their bounding box height in the image plane. As only objects that also appear in the image plane are labeled, objects in don't care areas do not count as false positives. We note that the evaluation does not ignore detections that are not visible in the image plane; such detections may give rise to false positives. For cars we require a bounding box overlap of 70% in bird's eye view, while for pedestrians and cyclists we require an overlap of 50%. Difficulties are defined as follows:

  • Easy: Min. bounding box height: 40 Px, Max. occlusion level: Fully visible, Max. truncation: 15 %
  • Moderate: Min. bounding box height: 25 Px, Max. occlusion level: Partly occluded, Max. truncation: 30 %
  • Hard: Min. bounding box height: 25 Px, Max. occlusion level: Difficult to see, Max. truncation: 50 %

All methods are ranked based on the moderately difficult results.
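The per-class overlap criterion can be sketched as follows. Note that the benchmark's actual overlap test uses rotated (oriented) boxes on the ground plane; the axis-aligned boxes below, given as (x, z, width, length), are an illustrative simplification.

```python
# Illustrative sketch of the bird's eye view overlap test with
# axis-aligned ground-plane boxes (x, z, width, length); the real
# benchmark evaluates rotated boxes.

def bev_iou(a, b):
    ax, az, aw, al = a
    bx, bz, bw, bl = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))  # overlap along x
    iz = max(0.0, min(az + al, bz + bl) - max(az, bz))  # overlap along z
    inter = ix * iz
    union = aw * al + bw * bl - inter
    return inter / union if union > 0 else 0.0

def matches(iou, cls):
    # cars require 70 % overlap; pedestrians and cyclists require 50 %
    return iou >= (0.7 if cls == "Car" else 0.5)
```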

Additional information used by the methods
  • Stereo: Method uses left and right (stereo) images
  • Flow: Method uses optical flow (2 temporally adjacent images)
  • Multiview: Method uses more than 2 temporally adjacent images
  • Laser Points: Method uses point clouds from Velodyne laser scanner
  • Additional training data: Use of additional data sources for training (see details)

Car


Method Setting Code Moderate Easy Hard Runtime Environment
1 MV3D (LIDAR)
This method makes use of Velodyne laser scans.
77.00 % 85.82 % 68.94 % 0.24 s GPU @ 2.5 GHz (Python + C/C++)
X. Chen, H. Ma, J. Wan, B. Li and T. Xia: Multi-View 3D Object Detection Network for Autonomous Driving. CVPR 2017.
2 MV3D
This method makes use of Velodyne laser scans.
76.90 % 86.02 % 68.49 % 0.36 s GPU @ 2.5 GHz (Python + C/C++)
X. Chen, H. Ma, J. Wan, B. Li and T. Xia: Multi-View 3D Object Detection Network for Autonomous Driving. CVPR 2017.
3 F-PC_CNN
This method makes use of Velodyne laser scans.
69.77 % 77.05 % 62.59 % 0.5 s GPU @ 3.0 GHz (Matlab + C/C++)
4 AVOD
This method makes use of Velodyne laser scans.
67.82 % 77.77 % 59.87 % 0.12 s GPU @ 1.5 GHz (Python)
5 3D FCN
This method makes use of Velodyne laser scans.
62.54 % 69.94 % 55.94 % >5 s 1 core @ 2.5 GHz (C/C++)
B. Li: 3D Fully Convolutional Network for Vehicle Detection in Point Cloud. IROS 2017.
6 SDN
This method makes use of Velodyne laser scans.
55.29 % 73.82 % 48.48 % 0.07 s GPU @ 1.5 GHz (Python)
7 LMNetV2
This method makes use of Velodyne laser scans.
37.12 % 39.83 % 32.00 % 0.02 s GPU @ 2.5 GHz (C/C++)
8 TCD 36.95 % 36.49 % 38.10 % 0.6 s GPU @ 2.5 GHz (Python + C/C++)
9 3dSSD 25.01 % 24.11 % 28.81 % 0.03 s GPU @ 2.5 GHz (Python + C/C++)
10 LMnet
This method makes use of Velodyne laser scans.
22.05 % 24.80 % 18.70 % 0.1 s GPU @ 1.1 GHz (Python + C/C++)
11 CSoR
This method makes use of Velodyne laser scans.
18.69 % 23.94 % 16.30 % 3.5 s 4 cores @ >3.5 GHz (Python + C/C++)
L. Plotkin: PyDriver: Entwicklung eines Frameworks für räumliche Detektion und Klassifikation von Objekten in Fahrzeugumgebung [PyDriver: Development of a Framework for Spatial Detection and Classification of Objects in Vehicle Environments]. 2015.
12 SPC
This method makes use of Velodyne laser scans.
5.07 % 7.28 % 4.19 % 0.4 s 4 cores @ 2.5 GHz (Python)
13 VeloFCN
This method makes use of Velodyne laser scans.
0.33 % 0.15 % 0.47 % 1 s GPU @ 2.5 GHz (Python + C/C++)
B. Li, T. Zhang and T. Xia: Vehicle Detection from 3D Lidar Using Fully Convolutional Network. RSS 2016.
14 LidarNet
This method makes use of Velodyne laser scans.
0.14 % 0.06 % 0.17 % 0.007 s GPU @ 2.5 GHz (C/C++)
15 mBoW
This method makes use of Velodyne laser scans.
0.00 % 0.00 % 0.00 % 10 s 1 core @ 2.5 GHz (C/C++)
J. Behley, V. Steinhage and A. Cremers: Laser-based Segment Classification Using a Mixture of Bag-of-Words. IROS 2013.

Pedestrian


Method Setting Code Moderate Easy Hard Runtime Environment
1 3dSSD 23.98 % 27.42 % 22.37 % 0.03 s GPU @ 2.5 GHz (Python + C/C++)
2 LMNetV2
This method makes use of Velodyne laser scans.
12.71 % 15.07 % 12.41 % 0.02 s GPU @ 2.5 GHz (C/C++)
3 LMnet
This method makes use of Velodyne laser scans.
2.31 % 4.23 % 2.44 % 0.1 s GPU @ 1.1 GHz (Python + C/C++)
4 mBoW
This method makes use of Velodyne laser scans.
0.01 % 0.01 % 0.01 % 10 s 1 core @ 2.5 GHz (C/C++)
J. Behley, V. Steinhage and A. Cremers: Laser-based Segment Classification Using a Mixture of Bag-of-Words. IROS 2013.

Cyclist


Method Setting Code Moderate Easy Hard Runtime Environment
1 LMNetV2
This method makes use of Velodyne laser scans.
5.02 % 6.12 % 5.10 % 0.02 s GPU @ 2.5 GHz (C/C++)
2 LMnet
This method makes use of Velodyne laser scans.
0.88 % 0.88 % 0.88 % 0.1 s GPU @ 1.1 GHz (Python + C/C++)
3 3dSSD 0.38 % 0.34 % 9.09 % 0.03 s GPU @ 2.5 GHz (Python + C/C++)
4 mBoW
This method makes use of Velodyne laser scans.
0.00 % 0.00 % 0.00 % 10 s 1 core @ 2.5 GHz (C/C++)
J. Behley, V. Steinhage and A. Cremers: Laser-based Segment Classification Using a Mixture of Bag-of-Words. IROS 2013.


Citation

If you use this dataset in your research, please cite:
@INPROCEEDINGS{Geiger2012CVPR,
  author = {Andreas Geiger and Philip Lenz and Raquel Urtasun},
  title = {Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2012}
}


