Depth Prediction Evaluation



The depth completion and depth prediction evaluations are related to our work published in Sparsity Invariant CNNs (3DV 2017). The dataset
contains over 93 thousand depth maps with corresponding raw LiDAR scans and RGB images, aligned with the "raw data" of the KITTI dataset.
Given the large amount of training data, this dataset should enable the training of complex deep learning models for the tasks of depth completion
and single image depth prediction. In addition, we provide manually selected images with unpublished depth maps to serve as a benchmark for these
two challenging tasks.

The structure of all provided depth maps is aligned with the structure of our raw data, making it easy to find corresponding left and right
images or other provided information.
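
For reference, the ground truth and raw LiDAR depth maps are stored as 16-bit PNGs in which a pixel value of 0 denotes a missing measurement and all other values encode depth in meters multiplied by 256, following the convention described in the development kit. A minimal Python sketch for reading such a file (load_depth_png is our own helper name, not part of the devkit):

import numpy as np
from PIL import Image

def load_depth_png(path):
    """Read a KITTI depth map stored as a 16-bit PNG.

    Following the devkit convention, pixel values encode depth in
    meters multiplied by 256, and a value of 0 marks pixels without
    a measurement.
    """
    depth_png = np.array(Image.open(path), dtype=np.uint16)
    # A maximum below 256 would indicate an 8-bit image, i.e. the wrong file.
    assert depth_png.max() > 255, "expected a 16-bit KITTI depth PNG"
    depth = depth_png.astype(np.float32) / 256.0
    valid = depth_png > 0  # 0 encodes "no measurement"
    return depth, valid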


Note: On 12.04.2018 we fixed a small error in the file data_depth_velodyne.zip. Please download this file again if you have an old version.


For all methods providing less than 100% density, the missing values have been filled in using simple background interpolation, as explained in the corresponding header file in the development kit.
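
The devkit implements this interpolation in C++; as a rough Python stand-in (not the devkit routine itself), invalid pixels can be filled with the depth of their nearest valid neighbor:

import numpy as np
from scipy.ndimage import distance_transform_edt

def fill_invalid_depth(depth, valid):
    """Fill pixels without a prediction using the nearest valid value.

    A crude approximation of the devkit's background interpolation:
    every invalid pixel receives the depth of its nearest valid neighbor.
    """
    # For each pixel, find the coordinates of the nearest valid pixel.
    _, nearest = distance_transform_edt(~valid, return_indices=True)
    return depth[tuple(nearest)]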

Our evaluation table ranks all methods according to the square root of the scale invariant logarithmic error (SILog). However, we also provide the following metrics:

  • SILog: Scale invariant logarithmic error [log(m)*100] (defined in the sketch below this list)
  • sqErrorRel: Relative squared error [percent]
  • absErrorRel: Relative absolute error [percent]
  • iRMSE: Root mean squared error of the inverse depth [1/km]
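
These metrics follow the usual definitions: with d_i = log(pred_i) - log(gt_i) over the pixels with ground truth, SILog is mean(d_i^2) - (mean d_i)^2 and the table reports 100 times its square root; the relative errors divide by the ground truth depth, and iRMSE is computed on inverse depth in 1/km. A hedged Python sketch consistent with the stated units (our own helper, not the reference devkit code):

import numpy as np

def depth_metrics(pred, gt, valid):
    """Benchmark metrics for a single image (depths in meters).

    A sketch following the units stated in the evaluation table,
    not the reference devkit implementation.
    """
    p, g = pred[valid], gt[valid]
    d = np.log(p) - np.log(g)
    # Scale invariant logarithmic error; the table lists 100 * sqrt of it.
    silog = 100.0 * np.sqrt(np.mean(d ** 2) - np.mean(d) ** 2)
    sq_error_rel = 100.0 * np.mean(((p - g) / g) ** 2)   # percent
    abs_error_rel = 100.0 * np.mean(np.abs(p - g) / g)   # percent
    # RMSE of inverse depth; the factor 1000 converts 1/m to 1/km.
    irmse = 1000.0 * np.sqrt(np.mean((1.0 / p - 1.0 / g) ** 2))
    return silog, sq_error_rel, abs_error_rel, irmse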


Important Policy Update: As more and more non-published work and re-implementations of existing work are submitted to KITTI, we have established a new policy: from now on, only submissions with significant novelty that lead to a peer-reviewed paper in a conference or journal are allowed. Minor modifications of existing algorithms and student research projects are not allowed; such work must be evaluated on a split of the training set. To ensure that our policy is adopted, new users must detail their status, describe their work and specify the targeted venue during registration. Furthermore, we will regularly delete all entries that are 6 months old but are still anonymous or do not have a paper associated with them. For conferences, 6 months are usually sufficient to determine whether a paper has been accepted and to add the bibliography information. For longer review cycles, you need to resubmit your results.
Additional information used by the methods
  • Additional training data: Use of additional data sources for training

Rank Method Setting Code SILog sqErrorRel absErrorRel iRMSE Runtime Environment
1 BTS code 11.67 2.21 9.04 12.23 0.1 s GPU @ 2.5 Ghz (Python + C/C++)
J. Lee, M. Han, D. Ko and I. Suh: From Big to Small: Multi-Scale Local Planar Guidance for Monocular Depth Estimation. arXiv:1907.10326 2019.
2 DL_61 (DORN) code 11.77 2.23 8.78 12.98 0.5 s GPU @ 2.5 Ghz (Python + C/C++)
H. Fu, M. Gong, C. Wang, K. Batmanghelich and D. Tao: Deep Ordinal Regression Network for Monocular Depth Estimation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018.
3 DL_SORD_SL 12.39 2.49 10.10 13.48 0.8 s GPU @ 2.5 Ghz (Python + C/C++)
R. Diaz and A. Marathe: Soft Labels for Ordinal Regression. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019.
4 BTS-256 12.47 2.70 9.92 13.42 0.1 s GPU @ 2.5 Ghz (Python + C/C++)
5 VNL code 12.65 2.46 10.15 13.02 0.5 s 1 core @ 2.5 Ghz (C/C++)
W. Yin, Y. Liu, C. Shen and Y. Yan: Enforcing geometric constraints of virtual normal for depth prediction. 2019.
6 DS-SIDENet_ROB 12.86 2.87 10.03 14.40 0.35 s GPU @ 2.5 Ghz (Python)
H. Ren, M. El-Khamy and J. Lee: Deep Robust Single Image Depth Estimation Neural Network Using Scene Understanding. IEEE Conference on Computer Vision and Pattern Recognition Workshop (CVPRW) 2019.
7 DL_SORD_SQ 13.00 2.95 10.38 13.78 0.88 s GPU @ 2.5 Ghz (Python + C/C++)
R. Diaz and A. Marathe: Soft Labels for Ordinal Regression. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019.
8 PAP 13.08 2.72 10.27 13.95 0.18 s GPU @ 2.5 Ghz (Python + C/C++)
Z. Zhang, Z. Cui, C. Xu, Y. Yan, N. Sebe and J. Yang: Pattern-Affinitive Propagation Across Depth, Surface Normal and Semantic Segmentation. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019.
9 GN-NET 13.35 2.88 10.22 13.61 0.12 s 1 core @ 1.5 Ghz (Python)
10 VGG16-UNet 13.41 2.86 10.60 15.06 0.16 s GPU @ 2.5 Ghz (Python + C/C++)
X. Guo, H. Li, S. Yi, J. Ren and X. Wang: Learning monocular depth by distilling cross-domain stereo networks. Proceedings of the European Conference on Computer Vision (ECCV) 2018.
11 AcED 13.45 3.63 10.61 13.94 0.5 s GPU @ 2.5 Ghz (Python)
12 DORN_ROB 13.53 3.06 10.35 15.96 2 s GPU @ 2.5 Ghz (Python)
H. Fu, M. Gong, C. Wang, K. Batmanghelich and D. Tao: Deep Ordinal Regression Network for Monocular Depth Estimation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018.
13 MonoS code 14.04 3.26 11.01 15.57 0.1 s 1 core @ 2.5 Ghz (Python)
14 BMMNet 14.37 5.10 10.92 15.51 0.074 s GPU @ 2.0 Ghz (Python + C/C++)
15 CARN 14.44 3.63 13.33 17.75 0.1 s GPU @ 2.5 Ghz (Python)
16 DABC_ROB 14.49 4.08 12.72 15.53 0.7 s GPU @ 2.0 Ghz (Matlab)
R. Li, K. Xian, C. Shen, Z. Cao, H. Lu and L. Hang: Deep attention-based classification network for robust depth prediction. Proceedings of the Asian Conference on Computer Vision (ACCV) 2018.
17 SDNet code 14.68 3.90 12.31 15.96 0.2 s GPU @ 2.5 Ghz (C/C++)
M. Ochs, A. Kretz and R. Mester: SDNet: Semantic Guided Depth Estimation Network. German Conference on Pattern Recognition (GCPR) 2019.
18 APMoE_base_ROB code 14.74 3.88 11.74 15.63 0.2 s GPU @ 3.5 Ghz (Matlab), Geforce Titan X
S. Kong and C. Fowlkes: Pixel-wise Attentional Gating for Parsimonious Pixel Labeling. arXiv:1805.01556 2018.
19 CSWS_E_ROB 14.85 3.48 11.84 16.38 0.2 s 1 core @ 2.5 Ghz (C/C++), Titan GTX 108
B. Li, Y. Dai and M. He: Monocular Depth Estimation with Hierarchical Fusion of Dilated CNNs and Soft-Weighted-Sum Inference. 2018.
20 HBC 15.18 3.79 12.33 17.86 0.05 s GPU @ 2.5 Ghz (Python)
21 DORN-t code 15.22 5.05 11.86 16.34 0.1 s 1 core @ 2.5 Ghz (Python)
22 semiDepth code 15.34 4.20 11.73 16.66 0.02 s GPU @ 2.5 Ghz (Python)
23 DHGRL 15.47 4.04 12.52 15.72 0.2 s GPU @ 2.5 Ghz (Python)
Z. Zhang, C. Xu, J. Yang, Y. Tai and L. Chen: Deep hierarchical guidance and regularization learning for end-to-end depth estimation. Pattern Recognition 2018.
24 GN-NET 15.53 3.29 11.63 16.04 0.12 s GPU @ 1.5 Ghz (Python)
25 FCRN_ROB 15.93 4.06 12.10 16.51 0.2 s 1 core @ 2.5 Ghz (Python)
26 MultiDepth 16.05 3.89 13.82 18.21 0.01 s GPU @ 1.5 Ghz (Python)
L. Liebel and M. Körner: MultiDepth: Single-Image Depth Estimation via Multi-Task Regression and Classification. IEEE Intelligent Transportation Systems Conference (ITSC) 2019 (to appear).
27 AI Mono Tech. code 17.21 6.98 13.60 16.80 0.04 s 1 core @ 1.5 Ghz (Python)
28 Modu_selfdriving_ROB 17.54 7.69 14.61 17.77 0.1 s GPU @ >3.5 Ghz (Python)
29 LSIM 17.92 6.88 14.04 17.62 0.08 s GPU @ 2.5 Ghz (Python)
M. Goldman, T. Hassner and S. Avidan: Learn Stereo, Infer Mono: Siamese Networks for Self-Supervised, Monocular, Depth Estimation. Computer Vision and Pattern Recognition Workshops (CVPRW) 2019.
30 FCRN 22.91 10.95 18.33 24.96 0.1 s 1 core @ 2.5 Ghz (C/C++)
31 TDT 28.64 25.45 28.24 30.10 0.4 s 1 core @ 2.5 Ghz (C/C++)
32 DSA 31.09 6.09 14.19 65.97 0.1 s 1 core @ 2.5 Ghz (Python)
33 PSM-Cross 35.89 173.68 91.19 52.42 0.45 s GPU @ 2.5 Ghz (Python)
34 RVGNet_ROB 37.71 10.66 23.39 62.48 0.3 s 1 core @ 2.5 Ghz (C/C++)
35 RVGNet 40.91 13.35 28.03 44.54 0.3 s GPU @ 2.5 Ghz (C/C++)




Related Datasets

  • SYNTHIA Dataset: SYNTHIA is a collection of photo-realistic frames rendered from a virtual city and comes with precise pixel-level semantic annotations as well as pixel-wise depth information. The dataset consists of over 200,000 HD images from video streams and over 20,000 HD images from independent snapshots.
  • Middlebury Stereo Evaluation: The classic stereo evaluation benchmark, featuring four test images in version 2 of the benchmark, with very accurate ground truth from a structured light system. 38 image pairs are provided in total.
  • Make3D Range Image Data: Images with small-resolution ground truth used to learn and evaluate depth from single monocular images.
  • Virtual KITTI Dataset: Virtual KITTI contains 50 high-resolution monocular videos (21,260 frames) generated from five different virtual worlds in urban settings under different imaging and weather conditions.
  • Scene Flow Dataset: The Freiburg Scene Flow Dataset collection has been used to train convolutional networks for disparity, optical flow, and scene flow estimation. The collection contains more than 39000 stereo frames in 960x540 pixel resolution, rendered from various synthetic sequences.

Citation

When using this dataset in your research, we will be happy if you cite us:
@INPROCEEDINGS{Uhrig2017THREEDV,
  author = {Jonas Uhrig and Nick Schneider and Lukas Schneider and Uwe Franke and Thomas Brox and Andreas Geiger},
  title = {Sparsity Invariant CNNs},
  booktitle = {International Conference on 3D Vision (3DV)},
  year = {2017}
}


