\begin{tabular}{c | c | c | c | c | c | c | c}
{\bf Method} & {\bf Setting} & {\bf iRMSE [1/km]} & {\bf iMAE [1/km]} & {\bf RMSE [mm]} & {\bf MAE [mm]} & {\bf Runtime / Hardware} & {\bf Reference}\\ \hline
GuideNet & & 2.25 & 0.99 & 736.24 & 218.83 & 0.14 s / GPU & J. Tang, F. Tian, W. Feng, J. Li and P. Tan: Learning Guided Convolutional Network for Depth Completion. arXiv preprint arXiv:1908.01238 2019.\\
RN & & 2.16 & 0.94 & 740.72 & 211.42 & 0.11 s / GPU & \\
NLSPN & & 1.99 & 0.84 & 741.68 & 199.59 & 0.22 s / GPU & \\
CSPN++ & & 2.07 & 0.90 & 743.69 & 209.28 & 0.2 s / 1 core & X. Cheng, P. Wang, C. Guan and R. Yang: CSPN++: Learning Context and Resource Aware Convolutional Spatial Propagation Networks for Depth Completion. Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20) 2020.\\
MPMSGNet & & 2.21 & 0.96 & 746.36 & 215.39 & 0.04 s / GPU & \\
& & 2.15 & 0.93 & 748.24 & 211.46 & / & \\
ACMGPNet & & 2.18 & 0.94 & 749.91 & 212.06 & 0.2 s / GPU & \\
ACMGPNet & & 2.21 & 0.94 & 750.01 & 212.32 & 0.2 s / GPU & \\
LCH & & 2.29 & 1.01 & 752.39 & 218.45 & 0.02 s / GPU & \\
UberATG-FuseNet & & 2.34 & 1.14 & 752.88 & 221.19 & 0.09 s / GPU & Y. Chen, B. Yang, M. Liang and R. Urtasun: Learning Joint 2D-3D Representations for Depth Completion. ICCV 2019.\\
& & 2.27 & 0.96 & 757.00 & 217.88 & / & \\
RM & & 2.30 & 0.99 & 757.77 & 218.70 & 0.012 s / GPU & \\
DeepLiDAR & & 2.56 & 1.15 & 758.38 & 226.50 & 0.07 s / GPU & J. Qiu, Z. Cui, Y. Zhang, X. Zhang, S. Liu, B. Zeng and M. Pollefeys: DeepLiDAR: Deep Surface Normal Guided Depth Prediction for Outdoor Scene From Sparse LiDAR Data and Single Color Image. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019.\\
S\&CNet & & 2.12 & 0.90 & 759.64 & 210.33 & 0.018 s / GPU & \\
FSPN & & 2.41 & 1.06 & 761.61 & 224.51 & 0.02 s / 1 core & \\
MSG-CHN & & 2.30 & 0.98 & 762.19 & 220.41 & 0.01 s / GPU & A. Li, Z. Yuan, Y. Ling, W. Chi, C. Zhang et al.: A Multi-Scale Guided Cascade Hourglass Network for Depth Completion. The IEEE Winter Conference on Applications of Computer Vision 2020.\\
Enhanced Net & & 2.16 & 0.92 & 763.46 & 213.49 & 0.02 s / GPU & \\
CA\_Fusion & & 2.42 & 1.08 & 764.61 & 227.81 & 0.02 s / 1 core & \\
DSPN & & 2.47 & 1.03 & 766.74 & 220.36 & 0.34 s / 1 core & \\
enhanced std & & 2.40 & 1.08 & 772.66 & 231.89 & 0.08 s / GPU & \\
RGB\_guide\&certainty & & 2.19 & 0.93 & 772.87 & 215.02 & 0.02 s / GPU & W. Van Gansbeke, D. Neven, B. De Brabandere and L. Van Gool: Sparse and noisy LiDAR completion with RGB guidance and uncertainty. International Conference on Machine Vision Applications (MVA) 2019.\\
RDESR & & 2.16 & 0.97 & 776.13 & 225.38 & 0.11 s / GPU & \\
PwP & & 2.42 & 1.13 & 777.05 & 235.17 & 0.1 s / GPU & Y. Xu et al.: Depth Completion from Sparse LiDAR Data with Depth-Normal Constraints. Proceedings of the IEEE International Conference on Computer Vision 2019.\\
GRNet & & 2.70 & 1.28 & 777.66 & 240.39 & 0.2 s / GPU & \\
UINet & & 2.52 & 1.07 & 785.57 & 235.73 & 0.02 s / GPU & \\
NGUI & & 2.47 & 1.01 & 793.22 & 228.41 & 0.2 s / 1 core & \\
3dDepthNet & & 2.36 & 1.02 & 798.44 & 226.27 & 0.03 s / 1 core & \\
ABCD & & 2.60 & 1.09 & 802.76 & 236.99 & 0.2 s / 1 core & \\
MAFN & & 3.02 & 1.48 & 803.50 & 279.37 & 0.02 s / GPU & \\
ABCD & & 2.53 & 1.07 & 806.61 & 234.61 & 0.2 s / 1 core & \\
CrossGuidance & & 2.73 & 1.33 & 807.42 & 253.98 & 0.2 s / 1 core & S. Lee, J. Lee, D. Kim and J. Kim: Deep Architecture with Cross Guidance Between Single Image and Sparse LiDAR Data for Depth Completion. IEEE Access 2020.\\
WPI\_ResNet18 & & 2.69 & 1.07 & 808.80 & 228.07 & 0.04 s / 1 core & \\
FastCompletion & & 2.62 & 1.04 & 813.10 & 234.31 & 0.08 s / 1 core & \\
Sparse-to-Dense (gd) & & 2.80 & 1.21 & 814.73 & 249.95 & 0.08 s / GPU & F. Ma, G. Cavalheiro and S. Karaman: Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera. 2019 IEEE International Conference on Robotics and Automation (ICRA) 2019.\\
& & 2.36 & 0.99 & 817.54 & 231.22 & / & \\
MMDC-NET(gd) & & 2.73 & 1.20 & 818.42 & 252.48 & 1 s / 1 core & \\
Bilateral & & 2.24 & 0.94 & 827.67 & 223.04 & 0.1 s / GPU & \\
NConv-CNN-L2 (gd) & & 2.60 & 1.03 & 829.98 & 233.26 & 0.02 s / GPU & A. Eldesokey, M. Felsberg and F. Khan: Confidence propagation through cnns for guided sparse depth regression. IEEE transactions on pattern analysis and machine intelligence 2019.\\
sp & & 2.67 & 1.17 & 830.35 & 257.05 & 0.01 s / GPU & \\
MSFF-Net & & 2.81 & 1.18 & 832.90 & 247.15 & 0.06 s / GPU & \\
DDP & & 2.10 & 0.85 & 832.94 & 203.96 & 0.08 s / GPU & Y. Yang, A. Wong and S. Soatto: Dense depth posterior (ddp) from single image and sparse range. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2019.\\
SSGP & & 2.57 & 1.10 & 844.12 & 248.66 & 0.14 s / GPU & \\
spatial\_enhancer & & 2.29 & 0.94 & 848.92 & 222.84 & 0.02 s / 1 core & \\
NConv-CNN-L1 (gd) & & 2.52 & 0.92 & 859.22 & 207.77 & 0.02 s / GPU & A. Eldesokey, M. Felsberg and F. Khan: Confidence propagation through cnns for guided sparse depth regression. IEEE transactions on pattern analysis and machine intelligence 2019.\\
temp & & 2.78 & 1.10 & 868.95 & 245.18 & 0.05 s / 1 core & \\
LSF2 & & 3.40 & 1.29 & 884.97 & 255.23 & 0.04 s / 1 core & \\
RR & & 2.38 & 1.09 & 894.18 & 268.48 & 0.02 s / 1 core & \\
RR\_rf & & 2.40 & 1.10 & 896.99 & 270.96 & 0.02 s / 1 core & \\
IR\_L2 & & 4.92 & 1.35 & 901.43 & 292.36 & 0.05 s / GPU & K. Lu, N. Barnes, S. Anwar and L. Zheng: From Depth What Can You See? Depth Completion via Auxiliary Image Reconstruction. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2020.\\
Spade-RGBsD & & 2.17 & 0.95 & 917.64 & 234.81 & 0.07 s / GPU & M. Jaritz, R. Charette, E. Wirbel, X. Perrotton and F. Nashashibi: Sparse and Dense Data with CNNs: Depth Completion and Semantic Segmentation. International Conference on 3D Vision (3DV) 2018.\\
glob\_guide\&certainty & & 2.80 & 1.07 & 922.93 & 249.11 & 0.02 s / GPU & W. Van Gansbeke, D. Neven, B. De Brabandere and L. Van Gool: Sparse and noisy LiDAR completion with RGB guidance and uncertainty. International Conference on Machine Vision Applications (MVA) 2019.\\
WPI & & 3.03 & 1.11 & 937.41 & 250.72 & 0.04 s / GPU & \\
& & 3.52 & 1.12 & 937.41 & 250.72 & / & \\
Branch-net & & 3.20 & 1.43 & 941.49 & 306.97 & 0.1 s / GPU & \\
DFineNet & & 3.21 & 1.39 & 943.89 & 304.17 & 0.02 s / GPU & Y. Zhang, T. Nguyen, I. Miller, S. Shivakumar, S. Chen, C. Taylor and V. Kumar: DFineNet: Ego-Motion Estimation and Depth Refinement from Sparse, Noisy Depth Input with RGB Guidance. CoRR 2019.\\
LSF & & 2.84 & 1.07 & 951.02 & 231.59 & 0.1 s / GPU & \\
Sparse-to-Dense (d) & & 3.21 & 1.35 & 954.36 & 288.64 & 0.04 s / GPU & F. Ma, G. Cavalheiro and S. Karaman: Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera. 2019 IEEE International Conference on Robotics and Automation (ICRA) 2019.\\
pNCNN & & 3.37 & 1.05 & 960.05 & 251.77 & 0.02 s / 1 core & \\
Conf-Net & & 3.10 & 1.09 & 962.28 & 257.54 & 0.02 s / GPU & H. Hekmatian, S. Al-Stouhi and J. Jin: Conf-Net: Predicting Depth Completion Error-Map For High-Confidence Dense 3D Point-Cloud. 2019.\\
DT\_Physical & & 3.09 & 1.16 & 965.65 & 261.86 & 0.04 s / & \\
DCrgb\_80b\_3coef & & 2.43 & 0.98 & 965.87 & 215.75 & 0.15 s / 1 core & S. Imran, Y. Long, X. Liu and D. Morris: Depth coefficients for depth completion. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019.\\
DCd\_all & & 2.87 & 1.13 & 988.38 & 252.21 & 0.1 s / 1 core & S. Imran, Y. Long, X. Liu and D. Morris: Depth coefficients for depth completion. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019.\\
LDepthNet & & 3.30 & 1.53 & 991.45 & 316.89 & 0.04 s / GPU & \\
OptComp & & 2.51 & 0.90 & 999.61 & 226.72 & 0.1 s / 1 core & \\
CSPN & & 2.93 & 1.15 & 1019.64 & 279.46 & 1 s / GPU & X. Cheng, P. Wang and R. Yang: Depth estimation via affinity learned with convolutional spatial propagation network. Proceedings of the European Conference on Computer Vision (ECCV) 2018; X. Cheng, P. Wang and R. Yang: Learning Depth with Convolutional Spatial Propagation Network. arXiv preprint arXiv:1810.02695 2018.\\
MsCNN & & 3.62 & 1.36 & 1034.39 & 301.15 & 0.02 s / GPU & \\
Spade-sD & & 2.60 & 0.98 & 1035.29 & 248.32 & 0.04 s / GPU & M. Jaritz, R. Charette, E. Wirbel, X. Perrotton and F. Nashashibi: Sparse and Dense Data with CNNs: Depth Completion and Semantic Segmentation. International Conference on 3D Vision (3DV) 2018.\\
Morph-Net & & 3.84 & 1.57 & 1045.45 & 310.49 & 0.17 s / GPU & M. Dimitrievski, P. Veelaert and W. Philips: Learning morphological operators for depth completion. Advanced Concepts for Intelligent Vision Systems 2018.\\
LiMono & & 3.49 & 1.73 & 1058.06 & 371.86 & 0.1 s / 1 core & \\
VLDepthNet & & 3.47 & 1.22 & 1067.48 & 283.14 & 0.03 s / GPU & \\
DAS & & 3.53 & 1.19 & 1095.26 & 280.42 & 0.1 s / 1 core & \\
DCd\_3 & & 2.95 & 1.07 & 1109.04 & 234.01 & 0.1 s / 1 core & S. Imran, Y. Long, X. Liu and D. Morris: Depth coefficients for depth completion. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019.\\
IMat & & 3.59 & 1.24 & 1111.39 & 284.25 & 0.1 s / 1 core & \\
ScaffFusion & & 3.30 & 1.15 & 1121.93 & 280.76 & 0.2 s / 1 core & \\
TSD & & 3.31 & 1.16 & 1123.37 & 281.56 & 0.1 s / 1 core & \\
AdaDC & & 3.57 & 1.19 & 1151.11 & 295.17 & 0.02 s / GPU & \\
temp\_1 & & 6.64 & 1.86 & 1165.46 & 374.67 & 0.02 s / 1 core & \\
VOICED & & 3.56 & 1.20 & 1169.97 & 299.41 & 0.02 s / 1 core & A. Wong, X. Fei, S. Tsuei and S. Soatto: Unsupervised Depth Completion from Visual Inertial Odometry. IEEE Robotics and Automation Letters 2020.\\
DFuseNet & & 3.62 & 1.79 & 1206.66 & 429.93 & 0.08 s / GPU & S. Shivakumar, T. Nguyen, S. Chen and C. Taylor: DFuseNet: Deep Fusion of RGB and Sparse Depth Information for Image Guided Dense Depth Completion. arXiv preprint arXiv:1902.00761 2019.\\
NN+CNN2 & & 12.80 & 1.43 & 1208.87 & 317.76 & 0.2 s / GPU & \\
NN\_removal & & 4.08 & 1.34 & 1245.13 & 315.53 & 0.018 s / 1 core & \\
NConv-CNN (d) & & 4.67 & 1.52 & 1268.22 & 360.28 & 0.01 s / GPU & A. Eldesokey, M. Felsberg and F. Khan: Propagating Confidences through CNNs for Sparse Data Regression. 2018.\\
SSDEwSDLPC & & 4.41 & 1.65 & 1276.94 & 345.45 & 0.02 s / GPU & \\
IP-Basic & & 3.78 & 1.29 & 1288.46 & 302.60 & 0.011 s / 1 core & J. Ku, A. Harakeh and S. Waslander: In Defense of Classical Image Processing: Fast Depth Completion on the CPU. 2018 15th Conference on Computer and Robot Vision (CRV) 2018.\\
Sparse2Dense(w/o gt) & & 4.07 & 1.57 & 1299.85 & 350.32 & 0.08 s / GPU & F. Ma, G. Cavalheiro and S. Karaman: Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera. 2019 IEEE International Conference on Robotics and Automation (ICRA) 2019.\\
ADNN & & 59.39 & 3.19 & 1325.37 & 439.48 & 0.04 s / GPU & N. Chodosh et al.: Deep Convolutional Compressed Sensing for LiDAR Depth Completion. Asian Conference on Computer Vision (ACCV) 2018.\\
MMDC-NET & & 4.39 & 1.61 & 1339.74 & 353.27 & 0.1 s / 1 core & \\
NG\_Depth & & 3.82 & 1.28 & 1372.45 & 297.31 & 0.8 s / 1 core & P. An, Y. Gao, J. Ma, J. Liang, B. Fang, K. Yu and T. Ma: Non-learning Normal Guided Depth Completion Method for LiDAR-Camera System. Submitted... 2020.\\
NN+CNN & & 3.25 & 1.29 & 1419.75 & 416.14 & 0.02 s / & J. Uhrig, N. Schneider, L. Schneider, U. Franke, T. Brox and A. Geiger: Sparsity Invariant CNNs. International Conference on 3D Vision (3DV) 2017.\\
B-ADT & & 4.16 & 1.23 & 1480.36 & 298.72 & 0.12 s / & \\
SparseConvs & & 4.94 & 1.78 & 1601.33 & 481.27 & 0.01 s / & J. Uhrig, N. Schneider, L. Schneider, U. Franke, T. Brox and A. Geiger: Sparsity Invariant CNNs. International Conference on 3D Vision (3DV) 2017.\\
NadarayaW & & 6.34 & 1.84 & 1852.60 & 416.77 & 0.05 s / 1 core & J. Uhrig, N. Schneider, L. Schneider, U. Franke, T. Brox and A. Geiger: Sparsity Invariant CNNs. International Conference on 3D Vision (3DV) 2017.\\
SGDU & & 7.38 & 2.05 & 2312.57 & 605.47 & 0.2 s / 4 cores & N. Schneider, L. Schneider, P. Pinggera, U. Franke, M. Pollefeys and C. Stiller: Semantically Guided Depth Upsampling. German Conference on Pattern Recognition 2016.\\
Dense-Subpixel & & 6.13 & 2.43 & 2379.07 & 671.86 & 0.5 s / 1 core & \\
\end{tabular}
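For reference, the metric columns above follow the standard KITTI depth completion conventions. A minimal sketch of the definitions, assuming $V$ denotes the set of pixels with ground-truth depth, $d_v$ the predicted and $d_v^{gt}$ the ground-truth depth (RMSE and MAE reported in mm, the inverse-depth errors iRMSE and iMAE in 1/km):
\begin{align*}
\mathrm{RMSE} &= \sqrt{\frac{1}{|V|}\sum_{v \in V}\bigl(d_v - d_v^{gt}\bigr)^{2}}, &
\mathrm{MAE} &= \frac{1}{|V|}\sum_{v \in V}\bigl|d_v - d_v^{gt}\bigr|, \\
\mathrm{iRMSE} &= \sqrt{\frac{1}{|V|}\sum_{v \in V}\left(\frac{1}{d_v} - \frac{1}{d_v^{gt}}\right)^{2}}, &
\mathrm{iMAE} &= \frac{1}{|V|}\sum_{v \in V}\left|\frac{1}{d_v} - \frac{1}{d_v^{gt}}\right|.
\end{align*}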