\begin{tabular}{c | c | c | c | c | c | c | c}
{\bf Method} & {\bf Setting} & {\bf iRMSE} [1/km] & {\bf iMAE} [1/km] & {\bf RMSE} [mm] & {\bf MAE} [mm] & {\bf Runtime / Hardware} & {\bf Reference}\\ \hline
BP-Net & & 1.82 & 0.84 & 684.90 & 194.69 & 0.15 s / GPU & J. Tang, F. Tian, B. An, J. Li and P. Tan: Bilateral Propagation Network for Depth Completion. CVPR 2024.\\
ImprovingDC & & 1.83 & 0.81 & 686.46 & 187.95 & 0.1 s / 8 cores & \\
SPN & & 1.86 & 0.83 & 692.50 & 194.18 & 0.3 s / GPU & \\
TPVD & & 1.82 & 0.81 & 693.97 & 188.60 & 0.01 s / GPU & Z. Yan, Y. Lin, K. Wang, Y. Zheng, Y. Wang, Z. Zhang, J. Li and J. Yang: Tri-Perspective View Decomposition for Geometry-Aware Depth Completion. CVPR (oral) 2024.\\
HFFNet & & 2.04 & 0.92 & 694.04 & 205.59 & 0.03 s / 1 core & \\
GMDepth & & 1.87 & 0.83 & 694.19 & 192.52 & 0.1 s / 1 core & \\
RigNet++ & & 1.82 & 0.81 & 694.24 & 188.62 & 0.06 s / GPU & Z. Yan, X. Li, Z. Zhang, J. Li and J. Yang: RigNet++: Efficient Repetitive Image Guided Network for Depth Completion. arXiv preprint arXiv:2309.00655 2023.\\
LRRU-Base-L2 & & 2.18 & 0.86 & 695.67 & 198.31 & 0.12 s / 8 cores & Y. Wang, B. Li, G. Zhang, Q. Liu, G. Tao and Y. Dai: LRRU: Long-short Range Recurrent Updating Networks for Depth Completion. ICCV 2023.\\
LRRU-Base-L2+L1 & & 1.87 & 0.81 & 696.51 & 189.96 & 0.12 s / GPU & Y. Wang, B. Li, G. Zhang, Q. Liu, G. Tao and Y. Dai: LRRU: Long-short Range Recurrent Updating Networks for Depth Completion. ICCV 2023.\\
BEV@DC & & 1.83 & 0.82 & 697.44 & 189.44 & 0.1 s / 1 core & W. Zhou, X. Yan, Y. Liao, Y. Lin, J. Huang, G. Zhao, S. Cui and Z. Li: BEV@DC: Bird's-Eye View Assisted Training for Depth Completion. CVPR 2023.\\
spn & & 1.87 & 0.83 & 697.59 & 195.01 & 0.12 s / 1 core & \\
NDDepth & & 1.89 & 0.83 & 698.71 & 192.75 & 0.1 s / 1 core & S. Shao, Z. Pei, W. Chen, P. Chen and Z. Li: NDDepth: Normal-Distance Assisted Monocular Depth Estimation and Completion. arXiv preprint arXiv:2311.07166 2023.\\
LM & & 1.89 & 0.83 & 700.23 & 194.67 & 0.09 s / 1 core & \\
IEBins & & 1.90 & 0.82 & 700.33 & 192.54 & 0.1 s / 1 core & \\
GFormer & & 1.92 & 0.82 & 702.64 & 190.86 & 0.02 s / GPU & \\
ImprovingDual-branch & & 1.99 & 0.89 & 706.23 & 199.14 & 0.1 s / 8 cores & \\
GCANet-accurate & & 2.14 & 0.97 & 707.53 & 213.04 & 0.047 s & \\
Decomposition B & & 2.05 & 0.91 & 707.93 & 205.11 & 0.1 s / GPU & Y. Wang, Y. Mao, Q. Liu and Y. Dai: Decomposed Guided Dynamic Filters for Efficient RGB-Guided Depth Completion. TCSVT 2023.\\
Decomposition A & & 2.04 & 0.91 & 708.30 & 205.01 & 0.1 s / GPU & Y. Wang, Y. Mao, Q. Liu and Y. Dai: Decomposed Guided Dynamic Filters for Efficient RGB-Guided Depth Completion. TCSVT 2023.\\
OGNI-DC L1+L2 & & 1.86 & 0.83 & 708.38 & 193.20 & 0.2 s / GPU & \\
CompletionFormer & & 2.01 & 0.88 & 708.87 & 203.45 & 0.12 s / GPU & Y. Zhang, X. Guo, M. Poggi, Z. Zhu, G. Huang and S. Mattoccia: CompletionFormer: Depth Completion with Convolutions and Vision Transformers. CVPR 2023.\\
MTSF-HDCN & & 2.13 & 1.00 & 709.04 & 213.95 & 0.03 s / 1 core & \\
DySPN & & 1.88 & 0.82 & 709.12 & 192.71 & 0.16 s / GPU & Y. Lin, T. Cheng, Q. Zhong, W. Zhou and H. Yang: Dynamic Spatial Propagation Network for Depth Completion. AAAI 2022.\\
SemAttNet & & 2.03 & 0.90 & 709.41 & 205.49 & 0.2 s / 1 core & D. Nazir, A. Pagani, M. Liwicki, D. Stricker and M. Afzal: SemAttNet: Towards Attention-based Semantic Aware Guided Depth Completion. IEEE Access 2022.\\
eR & & 2.01 & 0.89 & 710.85 & 202.45 & 0.1 s / 1 core & \\
GCANet-fast+CSPN++ & & 2.10 & 0.90 & 711.08 & 204.44 & 0.086 s & \\
RigNet & & 2.08 & 0.90 & 712.66 & 203.25 & 0.20 s / GPU & Z. Yan, K. Wang, X. Li, Z. Zhang, J. Li and J. Yang: RigNet: Repetitive Image Guided Network for Depth Completion. ECCV 2022.\\
LRRU-Small & & 2.01 & 0.88 & 713.64 & 203.60 & 0.05 s / GPU & Y. Wang, B. Li, G. Zhang, Q. Liu, G. Tao and Y. Dai: LRRU: Long-short Range Recurrent Updating Networks for Depth Completion. ICCV 2023.\\
NWKNet & & 2.44 & 1.20 & 714.28 & 225.08 & 0.12 s / 1 core & \\
GCANet\_acc+CSPN++ & & 2.08 & 0.90 & 714.47 & 206.97 & 0.105 s & \\
MEDO-n & & 2.04 & 0.90 & 715.87 & 207.59 & 0.08 s / GPU & \\
MEDO & & 2.03 & 0.89 & 717.00 & 207.59 & 0.05 s / 1 core & \\
LRRU-Small-L2+L1 & & 1.96 & 0.85 & 717.50 & 197.72 & 0.06 s / GPU & Y. Wang, B. Li, G. Zhang, Q. Liu, G. Tao and Y. Dai: LRRU: Long-short Range Recurrent Updating Networks for Depth Completion. ICCV 2023.\\
GCANet-middle & & 2.31 & 0.99 & 717.71 & 213.11 & 0.027 s & \\
HUGNet-NL & & 1.92 & 0.84 & 718.73 & 195.65 & 0.21 s / GPU & \\
Improving Single-bra & & 2.06 & 0.91 & 719.65 & 201.92 & 0.1 s / 8 cores & \\
MFF-Net & & 2.21 & 0.94 & 719.85 & 208.11 & 0.05 s / GPU & L. Liu, X. Song, J. Sun, X. Lyu, L. Li, Y. Liu and L. Zhang: MFF-Net: Towards Efficient Monocular Depth Completion with Multi-modal Feature Fusion. IEEE Robotics and Automation Letters 2023.\\
MED & & 2.05 & 0.90 & 719.88 & 208.56 & 0.04 s / 1 core & \\
GCANet-fast+NLSPN & & 2.15 & 0.93 & 720.42 & 210.69 & 0.044 s & \\
Dual-branch & & 2.07 & 0.92 & 720.96 & 203.73 & 0.1 s / 8 cores & \\
GCANet-fast & & 2.31 & 1.02 & 720.98 & 218.96 & 0.023 s & \\
Int & & 1.93 & 0.83 & 721.00 & 196.18 & 0.1 s / 1 core & \\
NEWNet & & 2.18 & 0.97 & 722.35 & 213.03 & 0.01 s / 1 core & \\
Light-SEF & & 1.96 & 0.85 & 723.36 & 195.87 & 0.07 s / GPU & \\
NNNet & & 1.99 & 0.88 & 724.14 & 205.57 & 0.03 s / 1 core & J. Liu and C. Jung: NNNet: New Normal Guided Depth Completion from Sparse LiDAR Data and Single Color Image. IEEE Access 2022.\\
HUGNet & & 2.02 & 0.88 & 724.64 & 200.28 & 0.09 s / GPU & \\
hcspn & & 2.08 & 0.91 & 725.19 & 209.91 & 0.08 s / GPU & \\
MRNANet & & 2.25 & 0.98 & 725.44 & 214.07 & 2 s / 1 core & \\
YDNet & & 2.24 & 1.00 & 727.58 & 219.04 & 0.02 s / 1 core & \\
ReDC & & 2.05 & 0.89 & 728.31 & 204.60 & 0.02 s & X. Sun, J. Ponce and Y. Wang: Revisiting Deformable Convolution for Depth Completion. IROS 2023.\\
PENet & & 2.17 & 0.94 & 730.08 & 210.55 & 0.032 s / GPU & M. Hu, S. Wang, B. Li, S. Ning, L. Fan and X. Gong: PENet: Towards Precise and Efficient Image Guided Depth Completion. ICRA 2021.\\
LRRU-Tiny-L2 & & 2.09 & 0.90 & 732.43 & 209.14 & 0.04 s / GPU & Y. Wang, B. Li, G. Zhang, Q. Liu, G. Tao and Y. Dai: LRRU: Long-short Range Recurrent Updating Networks for Depth Completion. ICCV 2023.\\
ACMNet & & 2.08 & 0.90 & 732.99 & 206.80 & 0.08 s / 1 core & S. Zhao, M. Gong, H. Fu and D. Tao: Adaptive context-aware multi-modal network for depth completion. IEEE Transactions on Image Processing 2021.\\
SPL & & 2.09 & 0.93 & 733.44 & 212.49 & 0.03 s / 1 core & X. Liang and C. Jung: Selective Progressive Learning for Sparse Depth Completion. ICPR 2022.\\
CluDe & & 2.08 & 0.88 & 734.59 & 200.48 & 0.14 s / GPU & S. Chen, H. Zhang, X. Ma, Z. Wang and H. Li: Learning Pixel-wise Continuous Depth Representation via Clustering for Depth Completion. TCSVT 2024.\\
MEDO-l & & 2.14 & 0.93 & 735.36 & 211.75 & 0.05 s / 1 core & \\
FCFR-Net & & 2.20 & 0.98 & 735.81 & 217.15 & 0.1 s / GPU & L. Liu, X. Song, X. Lyu, J. Diao, M. Wang, Y. Liu and L. Zhang: FCFR-Net: Feature Fusion based Coarse-to-Fine Residual Learning for Depth Completion. AAAI 2021.\\
test & & 2.04 & 0.87 & 736.14 & 200.89 & 0.15 s / 8 cores & \\
GuideNet & & 2.25 & 0.99 & 736.24 & 218.83 & 0.14 s / GPU & J. Tang, F. Tian, W. Feng, J. Li and P. Tan: Learning Guided Convolutional Network for Depth Completion. IEEE Transactions on Image Processing (TIP) 2020.\\
AF2R Net & & 2.27 & 1.01 & 736.44 & 220.16 & 0.02 s / GPU & \\
MSGD & & 2.12 & 0.93 & 737.14 & 212.44 & 0.05 s / 1 core & \\
test & & 2.06 & 0.86 & 737.83 & 198.82 & 0.14 s / GPU & \\
KeyNet & & 2.27 & 0.98 & 738.09 & 216.93 & 0.04 s / 1 core & \\
MDANet & & 2.12 & 0.99 & 738.23 & 214.99 & 0.03 s / GPU & Y. Ke, K. Li, W. Yang, Z. Xu, D. Hao, L. Huang and G. Wang: MDANet: Multi-Modal Deep Aggregation Network for Depth Completion. ICRA 2021.\\
CDCNet & & 2.18 & 0.99 & 738.26 & 216.05 & 0.06 s / GPU & R. Fan, Z. Li, M. Poggi and S. Mattoccia: A Cascade Dense Connection Fusion Network for Depth Completion. BMVC 2022.\\
LRRU-Tiny-L2+L1 & & 2.04 & 0.85 & 738.86 & 200.28 & 0.04 s / GPU & Y. Wang, B. Li, G. Zhang, Q. Liu, G. Tao and Y. Dai: LRRU: Long-short Range Recurrent Updating Networks for Depth Completion. ICCV 2023.\\
test & & 2.02 & 0.86 & 739.04 & 199.60 & 0.14 s / GPU & \\
ENet & & 2.14 & 0.95 & 741.30 & 216.26 & 0.019 s / GPU & M. Hu, S. Wang, B. Li, S. Ning, L. Fan and X. Gong: PENet: Towards Precise and Efficient Image Guided Depth Completion. ICRA 2021.\\
test & & 2.04 & 0.87 & 741.49 & 200.70 & 0.14 s / GPU & \\
GLCA & & 2.32 & 1.02 & 741.61 & 216.24 & 0.01 s / 1 core & \\
NLSPN & & 1.99 & 0.84 & 741.68 & 199.59 & 0.22 s / GPU & J. Park, K. Joo, Z. Hu, C. Liu and I. Kweon: Non-Local Spatial Propagation Network for Depth Completion. ECCV 2020.\\
CluDe* & & 2.02 & 0.86 & 742.26 & 197.91 & 0.14 s / GPU & S. Chen, H. Zhang, X. Ma, Z. Wang and H. Li: Learning Pixel-wise Continuous Depth Representation via Clustering for Depth Completion. TCSVT 2024.\\
FANet-big48 & & 2.07 & 0.88 & 742.30 & 202.53 & 0.08 s / GPU & \\
CSPN++ & & 2.07 & 0.90 & 743.69 & 209.28 & 0.2 s / 1 core & X. Cheng, P. Wang, C. Guan and R. Yang: CSPN++: Learning Context and Resource Aware Convolutional Spatial Propagation Networks for Depth Completion. AAAI 2020.\\
ACMNet & & 2.08 & 0.90 & 744.91 & 206.09 & 0.08 s / GPU & S. Zhao, M. Gong, H. Fu and D. Tao: Adaptive context-aware multi-modal network for depth completion. IEEE Transactions on Image Processing 2021.\\
FANet-Big55 & & 2.08 & 0.88 & 745.00 & 205.04 & 0.08 s / GPU & \\
Single-branch & & 2.22 & 0.95 & 745.16 & 209.86 & 0.1 s / 8 cores & \\
FANet-big50 & & 2.06 & 0.87 & 745.66 & 201.20 & 0.08 s / GPU & \\
FANet0045 & & 2.07 & 0.88 & 746.37 & 205.72 & 0.06 s / GPU & \\
OGNI-DC L1 & & 1.81 & 0.79 & 747.64 & 182.29 & 0.2 s / GPU & \\
CDCNet-lite & & 2.22 & 0.95 & 748.99 & 215.38 & 0.04 s / GPU & R. Fan, Z. Li, M. Poggi and S. Mattoccia: A Cascade Dense Connection Fusion Network for Depth Completion. BMVC 2022.\\
FANet31 & & 2.04 & 0.87 & 749.36 & 201.56 & 0.06 s / GPU & \\
Ms\_Unc\_UARes-B & & 1.98 & 0.85 & 751.59 & 198.09 & 0.1 s / GPU & Y. Zhu, W. Dong, L. Li, J. Wu, X. Li and G. Shi: Robust Depth Completion with Uncertainty-Driven Loss Functions. AAAI 2022.\\
UberATG-FuseNet & & 2.34 & 1.14 & 752.88 & 221.19 & 0.09 s / GPU & Y. Chen, B. Yang, M. Liang and R. Urtasun: Learning Joint 2D-3D Representations for Depth Completion. ICCV 2019.\\
LDCNet & & 2.33 & 0.98 & 753.15 & 218.02 & 0.05 s / GPU & Z. Yan, Y. Zheng, C. Li, J. Li and J. Yang: Learnable Differencing Center for Nighttime Depth Perception. 2023.\\
DepthPrompting & & 2.02 & 0.87 & 754.48 & 206.15 & 0.06 s / 1 core & \\
DenseLiDAR & & 2.25 & 0.96 & 755.41 & 214.13 & 0.02 s / 1 core & J. Gu, Z. Xiang, Y. Ye and L. Wang: DenseLiDAR: A Real-Time Pseudo Dense Depth Guided Depth Completion Network. IEEE Robotics and Automation Letters 2021.\\
DepthPrompting & & 2.04 & 0.88 & 756.27 & 206.62 & 0.06 s / 1 core & \\
DepthPrompting & & 2.02 & 0.86 & 756.84 & 204.94 & 0.06 s / 1 core & \\
FANet & & 2.29 & 1.07 & 758.25 & 230.19 & 0.04 s / GPU & \\
DeepLiDAR & & 2.56 & 1.15 & 758.38 & 226.50 & 0.07 s / GPU & J. Qiu, Z. Cui, Y. Zhang, X. Zhang, S. Liu, B. Zeng and M. Pollefeys: DeepLiDAR: Deep Surface Normal Guided Depth Prediction for Outdoor Scene From Sparse LiDAR Data and Single Color Image. CVPR 2019.\\
DP & & 2.03 & 0.87 & 758.50 & 206.55 & 0.06 s / 1 core & \\
DANConv & & 2.17 & 0.92 & 759.65 & 213.68 & 0.05 s / GPU & L. Yan, K. Liu and G. Long: DAN-Conv: Depth aware non-local convolution for LiDAR depth completion. Electronics Letters 2021.\\
fanet\_s50 & & 2.14 & 0.91 & 760.50 & 208.10 & 0.02 s / GPU & \\
MSG-CHN & & 2.30 & 0.98 & 762.19 & 220.41 & 0.01 s / GPU & A. Li, Z. Yuan, Y. Ling, W. Chi, C. Zhang et al.: A Multi-Scale Guided Cascade Hourglass Network for Depth Completion. WACV 2020.\\
ABCD & & 2.29 & 0.97 & 764.61 & 220.86 & 0.02 s / 1 core & Y. Jeon, H. Kim and S. Seo: ABCD: Attentive Bilateral Convolutional Network for Robust Depth Completion. IEEE Robotics and Automation Letters 2021.\\
CompletionFormer & & 1.89 & 0.80 & 764.87 & 183.88 & 0.12 s / GPU & Y. Zhang, X. Guo, M. Poggi, Z. Zhu, G. Huang and S. Mattoccia: CompletionFormer: Depth Completion with Convolutions and Vision Transformers. CVPR 2023.\\
LRRU-Mini-L2 & & 2.26 & 0.94 & 765.95 & 218.31 & 0.03 s / GPU & Y. Wang, B. Li, G. Zhang, Q. Liu, G. Tao and Y. Dai: LRRU: Long-short Range Recurrent Updating Networks for Depth Completion. ICCV 2023.\\
fanet\_light & & 2.15 & 0.93 & 766.41 & 207.31 & 0.027 s / GPU & \\
DSPN & & 2.47 & 1.03 & 766.74 & 220.36 & 0.34 s / 1 core & Z. Xu, H. Yin and J. Yao: Deformable Spatial Propagation Networks for Depth Completion. ICIP 2020.\\
ADNet\_small & & 2.07 & 0.88 & 767.17 & 209.44 & 0.05 s / 1 core & \\
RGB\_guide\&certainty & & 2.19 & 0.93 & 772.87 & 215.02 & 0.02 s / GPU & W. Van Gansbeke, D. Neven, B. De Brabandere and L. Van Gool: Sparse and noisy LiDAR completion with RGB guidance and uncertainty. MVA 2019.\\
resnet34al & & 2.16 & 0.89 & 773.71 & 203.35 & 0.09 s / GPU & \\
GAENet(Full) & & 2.29 & 1.08 & 773.90 & 231.29 & 0.05 s / GPU & W. Du, H. Chen, H. Yang and Y. Zhang: Depth Completion using Geometry-Aware Embedding. ICRA 2022.\\
LRRU-Mini-L2+L1 & & 2.21 & 0.90 & 774.43 & 210.87 & 0.03 s / GPU & Y. Wang, B. Li, G. Zhang, Q. Liu, G. Tao and Y. Dai: LRRU: Long-short Range Recurrent Updating Networks for Depth Completion. ICCV 2023.\\
DVMN & & 2.21 & 0.94 & 776.31 & 220.37 & 0.12 s / GPU & L. Reichardt, P. Mangat and O. Wasenmüller: DVMN: Dense Validity Mask Network for Depth Completion. ITSC 2021.\\
PwP & & 2.42 & 1.13 & 777.05 & 235.17 & 0.1 s / GPU & Y. Xu et al.: Depth Completion from Sparse LiDAR Data with Depth-Normal Constraints. ICCV 2019.\\
resnet18al & & 2.23 & 0.91 & 782.80 & 206.77 & 0.07 s / GPU & \\
hdcn & & 2.16 & 0.90 & 785.05 & 214.41 & 0.1 s / 1 core & \\
ADNet & & 2.30 & 0.93 & 790.34 & 216.03 & 0.05 s / 1 core & \\
Revisiting & & 2.42 & 0.99 & 792.80 & 225.81 & 0.05 s / GPU & L. Yan, K. Liu and E. Belyaev: Revisiting Sparsity Invariant Convolution: A Network for Image Guided Depth Completion. IEEE Access 2020.\\
Ms\_Unc\_UARes & & 1.98 & 0.83 & 795.61 & 190.88 & 0.08 s / GPU & Y. Zhu, W. Dong, L. Li, J. Wu, X. Li and G. Shi: Robust Depth Completion with Uncertainty-Driven Loss Functions. AAAI 2022.\\
BA\&GC & & 2.44 & 1.05 & 799.31 & 232.98 & 0.05 s / GPU & K. Liu, Q. Li and Y. Zhou: An adaptive converged depth completion network based on efficient RGB guidance. Multimedia Tools and Applications 2022.\\
CrossGuidance & & 2.73 & 1.33 & 807.42 & 253.98 & 0.2 s / 1 core & S. Lee, J. Lee, D. Kim and J. Kim: Deep Architecture with Cross Guidance Between Single Image and Sparse LiDAR Data for Depth Completion. IEEE Access 2020.\\
Sparse-to-Dense (gd) & & 2.80 & 1.21 & 814.73 & 249.95 & 0.08 s / GPU & F. Ma, G. Cavalheiro and S. Karaman: Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera. ICRA 2019.\\
NConv-CNN-L2 (gd) & & 2.60 & 1.03 & 829.98 & 233.26 & 0.02 s / GPU & A. Eldesokey, M. Felsberg and F. Khan: Confidence propagation through CNNs for guided sparse depth regression. IEEE Transactions on Pattern Analysis and Machine Intelligence 2019.\\
DDP & & 2.10 & 0.85 & 832.94 & 203.96 & 0.08 s / GPU & Y. Yang, A. Wong and S. Soatto: Dense depth posterior (DDP) from single image and sparse range. CVPR 2019.\\
SSGP & & 2.51 & 1.09 & 838.22 & 244.70 & 0.14 s & R. Schuster, O. Wasenmüller, C. Unger and D. Stricker: SSGP: Sparse Spatial Guided Propagation for Robust and Generic Interpolation. WACV 2021.\\
TWISE & & 2.08 & 0.82 & 840.20 & 195.58 & 0.02 s / GPU & S. Imran, X. Liu and D. Morris: Depth Completion With Twin Surface Extrapolation at Occlusion Boundaries. CVPR 2021.\\
ScaffFusion-SSL & & 3.24 & 0.88 & 847.22 & 205.75 & 0.03 s / 1 core & A. Wong, S. Cicek and S. Soatto: Learning topology from synthetic data for unsupervised depth completion. IEEE Robotics and Automation Letters 2021.\\
NConv-CNN-L1 (gd) & & 2.52 & 0.92 & 859.22 & 207.77 & 0.02 s / GPU & A. Eldesokey, M. Felsberg and F. Khan: Confidence propagation through CNNs for guided sparse depth regression. IEEE Transactions on Pattern Analysis and Machine Intelligence 2019.\\
GCANet-acc+NLSPN & & 3.18 & 1.21 & 885.28 & 259.49 & 0.088 s & \\
IR\_L2 & & 4.92 & 1.35 & 901.43 & 292.36 & 0.05 s / GPU & K. Lu, N. Barnes, S. Anwar and L. Zheng: From Depth What Can You See? Depth Completion via Auxiliary Image Reconstruction. CVPR 2020.\\
HFFNet(depth-only) & & 2.61 & 1.02 & 913.16 & 238.55 & 0.1 s / 1 core & \\
Spade-RGBsD & & 2.17 & 0.95 & 917.64 & 234.81 & 0.07 s / GPU & M. Jaritz, R. de Charette, E. Wirbel, X. Perrotton and F. Nashashibi: Sparse and Dense Data with CNNs: Depth Completion and Semantic Segmentation. 3DV 2018.\\
HFFNet(d1) & & 2.78 & 1.21 & 918.91 & 273.91 & 0.1 s / 1 core & \\
glob\_guide\&certainty & & 2.80 & 1.07 & 922.93 & 249.11 & 0.02 s / GPU & W. Van Gansbeke, D. Neven, B. De Brabandere and L. Van Gool: Sparse and noisy LiDAR completion with RGB guidance and uncertainty. MVA 2019.\\
DesNet & & 2.95 & 1.13 & 938.45 & 266.24 & 0.01 s / GPU & Z. Yan, K. Wang, X. Li, Z. Zhang, J. Li and J. Yang: DesNet: Decomposed Scale-Consistent Network for Unsupervised Depth Completion. AAAI (oral) 2023.\\
DFineNet & & 3.21 & 1.39 & 943.89 & 304.17 & 0.02 s / GPU & Y. Zhang, T. Nguyen, I. Miller, S. Shivakumar, S. Chen, C. Taylor and V. Kumar: DFineNet: Ego-Motion Estimation and Depth Refinement from Sparse, Noisy Depth Input with RGB Guidance. CoRR 2019.\\
Sparse-to-Dense (d) & & 3.21 & 1.35 & 954.36 & 288.64 & 0.04 s / GPU & F. Ma, G. Cavalheiro and S. Karaman: Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera. ICRA 2019.\\
pNCNN (d) & & 3.37 & 1.05 & 960.05 & 251.77 & 0.02 s / 1 core & A. Eldesokey, M. Felsberg, K. Holmquist and M. Persson: Uncertainty-Aware CNNs for Depth Completion: Uncertainty from Beginning to End. CVPR 2020.\\
Conf-Net & & 3.10 & 1.09 & 962.28 & 257.54 & 0.02 s / GPU & H. Hekmatian, S. Al-Stouhi and J. Jin: Conf-Net: Predicting Depth Completion Error-Map For High-Confidence Dense 3D Point-Cloud. 2019.\\
DCrgb\_80b\_3coef & & 2.43 & 0.98 & 965.87 & 215.75 & 0.15 s / 1 core & S. Imran, Y. Long, X. Liu and D. Morris: Depth coefficients for depth completion. CVPR 2019.\\
DCd\_all & & 2.87 & 1.13 & 988.38 & 252.21 & 0.1 s / 1 core & S. Imran, Y. Long, X. Liu and D. Morris: Depth coefficients for depth completion. CVPR 2019.\\
LW-DepthNet & & 2.99 & 1.09 & 991.88 & 261.67 & 0.09 s / GPU & L. Bai, Y. Zhao, M. Elhousni and X. Huang: DepthNet: Real-Time LiDAR Point Cloud Depth Completion for Autonomous Vehicles. arXiv preprint arXiv:2007.02438 2020.\\
CSPN & & 2.93 & 1.15 & 1019.64 & 279.46 & 1 s / GPU & X. Cheng, P. Wang and R. Yang: Depth estimation via affinity learned with convolutional spatial propagation network. ECCV 2018; X. Cheng, P. Wang and R. Yang: Learning Depth with Convolutional Spatial Propagation Network. arXiv preprint arXiv:1810.02695 2018.\\
Spade-sD & & 2.60 & 0.98 & 1035.29 & 248.32 & 0.04 s / GPU & M. Jaritz, R. de Charette, E. Wirbel, X. Perrotton and F. Nashashibi: Sparse and Dense Data with CNNs: Depth Completion and Semantic Segmentation. 3DV 2018.\\
Morph-Net & & 3.84 & 1.57 & 1045.45 & 310.49 & 0.17 s / GPU & M. Dimitrievski, P. Veelaert and W. Philips: Learning morphological operators for depth completion. Advanced Concepts for Intelligent Vision Systems 2018.\\
SynthProjV & & 3.12 & 1.13 & 1062.48 & 268.37 & 0.1 s / 1 core & A. Lopez-Rodriguez, B. Busam and K. Mikolajczyk: Project to Adapt: Domain Adaptation for Depth Completion from Noisy and Sparse Sensor Data. ACCV 2020.\\
KBNet & & 2.95 & 1.02 & 1069.47 & 256.76 & 0.01 s / 1 core & A. Wong and S. Soatto: Unsupervised Depth Completion with Calibrated Backprojection Layers. ICCV 2021.\\
test4 & & 3.45 & 1.18 & 1069.88 & 272.55 & 0.1 s / 1 core & \\
test3 & & 3.48 & 1.19 & 1074.53 & 280.93 & 0.1 s / 1 core & \\
VLW-DepthNet & & 3.43 & 1.21 & 1077.22 & 282.02 & 0.09 s / GPU & L. Bai, Y. Zhao, M. Elhousni and X. Huang: DepthNet: Real-Time LiDAR Point Cloud Depth Completion for Autonomous Vehicles. arXiv preprint arXiv:2007.02438 2020.\\
test2 & & 3.47 & 1.19 & 1080.70 & 278.99 & 0.1 s / 1 core & \\
test & & 3.56 & 1.18 & 1087.04 & 275.71 & 0.1 s / 1 core & \\
SynthProj & & 3.53 & 1.19 & 1095.26 & 280.42 & 0.1 s / 1 core & A. Lopez-Rodriguez, B. Busam and K. Mikolajczyk: Project to Adapt: Domain Adaptation for Depth Completion from Noisy and Sparse Sensor Data. ACCV 2020.\\
DCd\_3 & & 2.95 & 1.07 & 1109.04 & 234.01 & 0.1 s / 1 core & S. Imran, Y. Long, X. Liu and D. Morris: Depth coefficients for depth completion. CVPR 2019.\\
ScaffFusion & & 3.32 & 1.17 & 1121.89 & 282.86 & 0.03 s / 1 core & A. Wong, S. Cicek and S. Soatto: Learning topology from synthetic data for unsupervised depth completion. IEEE Robotics and Automation Letters 2021.\\
AdaFrame-VGG8 & & 3.32 & 1.16 & 1125.67 & 291.62 & 0.02 s / GPU & A. Wong, X. Fei, B. Hong and S. Soatto: An Adaptive Framework for Learning Unsupervised Depth Completion. IEEE Robotics and Automation Letters 2021.\\
VOICED & & 3.56 & 1.20 & 1169.97 & 299.41 & 0.02 s / 1 core & A. Wong, X. Fei, S. Tsuei and S. Soatto: Unsupervised Depth Completion from Visual Inertial Odometry. IEEE Robotics and Automation Letters 2020.\\
DFuseNet & & 3.62 & 1.79 & 1206.66 & 429.93 & 0.08 s / GPU & S. Shivakumar, T. Nguyen, S. Chen and C. Taylor: DFuseNet: Deep Fusion of RGB and Sparse Depth Information for Image Guided Dense Depth Completion. arXiv preprint arXiv:1902.00761 2019.\\
NonLearning Complete & & 3.63 & 1.23 & 1222.00 & 303.82 & 0.84 s / 1 core & B. Krauss, G. Schroeder, M. Gustke and A. Hussein: Deterministic Guided LiDAR Depth Map Completion. IV 2021.\\
PDC & & 3.89 & 1.26 & 1227.96 & 288.55 & 10 s / 1 core & D. Teutscher, P. Mangat and O. Wasenmüller: PDC: Piecewise Depth Completion utilizing Superpixels. ITSC 2021.\\
Physical\_Surface\_Mod & & 3.76 & 1.21 & 1239.84 & 298.30 & 0.06 s / 1 core & Y. Zhao, L. Bai, Z. Zhang and X. Huang: A Surface Geometry Model for LiDAR Depth Completion. IEEE Robotics and Automation Letters 2021.\\
NG\_Depth & & 14.93 & 1.38 & 1266.22 & 305.98 & 0.8 s / 1 core & P. An, Y. Gao, W. Fu, J. Ma, B. Fang and K. Yu: Lambertian Model Based Normal Guided Depth Completion for LiDAR-Camera System. IEEE GRSL 2021.\\
NConv-CNN (d) & & 4.67 & 1.52 & 1268.22 & 360.28 & 0.01 s / GPU & A. Eldesokey, M. Felsberg and F. Khan: Propagating Confidences through CNNs for Sparse Data Regression. 2018.\\
IP-Basic & & 3.78 & 1.29 & 1288.46 & 302.60 & 0.011 s / 1 core & J. Ku, A. Harakeh and S. Waslander: In Defense of Classical Image Processing: Fast Depth Completion on the CPU. CRV 2018.\\
Sparse2Dense(w/o gt) & & 4.07 & 1.57 & 1299.85 & 350.32 & 0.08 s / GPU & F. Ma, G. Cavalheiro and S. Karaman: Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera. ICRA 2019.\\
ADNN & & 59.39 & 3.19 & 1325.37 & 439.48 & 0.04 s / GPU & N. Chodosh et al.: Deep Convolutional Compressed Sensing for LiDAR Depth Completion. ACCV 2018.\\
NN+CNN & & 3.25 & 1.29 & 1419.75 & 416.14 & 0.02 s & J. Uhrig, N. Schneider, L. Schneider, U. Franke, T. Brox and A. Geiger: Sparsity Invariant CNNs. 3DV 2017.\\
B-ADT & & 4.16 & 1.23 & 1480.36 & 298.72 & 0.12 s & Y. Yao, M. Roxas, R. Ishikawa, S. Ando, J. Shimamura and T. Oishi: Discontinuous and Smooth Depth Completion with Binary Anisotropic Diffusion Tensor. IEEE Robotics and Automation Letters 2020.\\
SparseConvs & & 4.94 & 1.78 & 1601.33 & 481.27 & 0.01 s & J. Uhrig, N. Schneider, L. Schneider, U. Franke, T. Brox and A. Geiger: Sparsity Invariant CNNs. 3DV 2017.\\
NadarayaW & & 6.34 & 1.84 & 1852.60 & 416.77 & 0.05 s / 1 core & J. Uhrig, N. Schneider, L. Schneider, U. Franke, T. Brox and A. Geiger: Sparsity Invariant CNNs. 3DV 2017.\\
SGDU & & 7.38 & 2.05 & 2312.57 & 605.47 & 0.2 s / 4 cores & N. Schneider, L. Schneider, P. Pinggera, U. Franke, M. Pollefeys and C. Stiller: Semantically Guided Depth Upsampling. GCPR 2016.
\end{tabular}
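The table is sorted by RMSE, and lower is better on all four error columns. These are the standard KITTI depth completion metrics: with $\hat{d}_p$ the predicted depth, $d_p$ the ground-truth depth at pixel $p$, and $\mathcal{V}$ the set of pixels with valid ground truth,
\begin{align*}
\mathrm{RMSE} &= \sqrt{\frac{1}{|\mathcal{V}|}\sum_{p \in \mathcal{V}} \bigl(\hat{d}_p - d_p\bigr)^2} \;[\mathrm{mm}], &
\mathrm{MAE} &= \frac{1}{|\mathcal{V}|}\sum_{p \in \mathcal{V}} \bigl|\hat{d}_p - d_p\bigr| \;[\mathrm{mm}],\\
\mathrm{iRMSE} &= \sqrt{\frac{1}{|\mathcal{V}|}\sum_{p \in \mathcal{V}} \left(\frac{1}{\hat{d}_p} - \frac{1}{d_p}\right)^2} \;[1/\mathrm{km}], &
\mathrm{iMAE} &= \frac{1}{|\mathcal{V}|}\sum_{p \in \mathcal{V}} \left|\frac{1}{\hat{d}_p} - \frac{1}{d_p}\right| \;[1/\mathrm{km}].
\end{align*}
The inverse-depth metrics (iRMSE, iMAE) weight near-range errors more heavily, which is why rankings under RMSE and iRMSE can differ for the same method.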