Novel View Synthesis

Novel View Appearance Synthesis (90% Drop Rate)


We select 10 static scenes, each covering a driving distance of ∼ 50 meters, for evaluating NVS at a 90% drop rate. To avoid redundancy when the vehicle moves slowly, we select one frame every ∼ 4.0 meters of driving distance (the overall average distance between frames). We release 50% of the frames for training and retain 50% for evaluation. The evaluation table ranks all methods by peak signal-to-noise ratio (PSNR); we also report the structural similarity index (SSIM) and perceptual similarity (LPIPS).
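For reference, the sketch below shows one possible way to compute the three reported metrics for a single predicted/ground-truth image pair. It is a minimal illustration, not the benchmark's evaluation code; the LPIPS backbone, color handling, and data range are assumptions.

```python
# Minimal sketch of per-frame PSNR / SSIM / LPIPS computation.
# Assumptions: 8-bit RGB images, AlexNet-based LPIPS; the official
# evaluation may use different settings.
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net='alex')  # assumed backbone

def evaluate_pair(pred: np.ndarray, gt: np.ndarray) -> dict:
    """pred, gt: HxWx3 uint8 images of the same held-out view."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
    # LPIPS expects NCHW float tensors scaled to [-1, 1]
    to_tensor = lambda im: torch.from_numpy(im).permute(2, 0, 1)[None].float() / 127.5 - 1.0
    with torch.no_grad():
        lp = lpips_fn(to_tensor(pred), to_tensor(gt)).item()
    return {"PSNR": psnr, "SSIM": ssim, "LPIPS": lp}
```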

Method | Setting | Code | PSNR | SSIM | LPIPS | Runtime | Environment
1 DGNerf | | code | 17.33 | 0.714 | 0.397 | 1 s | 1 core @ 2.5 GHz (C/C++)
2 MVSRegNeRF | | | 17.20 | 0.702 | 0.424 | 2 s | 1 core @ 2.5 GHz (C/C++)
F. Bian, S. Xiong, R. Yi and L. Ma: Multi-view stereo-regulated NeRF for urban scene novel view synthesis. The Visual Computer 2024.
3 NeRF | uses stereo information | | 15.74 | 0.648 | 0.590 | 10 s | 1 core @ 2.5 GHz (C/C++)
B. Mildenhall, P. Srinivasan, M. Tancik, J. Barron, R. Ramamoorthi and R. Ng: NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. ECCV 2020.