Andreas Geiger

Publications of Stefano Esposito

Volumetric Surfaces: Representing Fuzzy Geometries with Layered Meshes
S. Esposito, A. Chen, C. Reiser, S. Rota Bulò, L. Porzi, K. Schwarz, C. Richardt, M. Zollhöfer, P. Kontschieder and A. Geiger
Conference on Computer Vision and Pattern Recognition (CVPR), 2025
Abstract: High-quality view synthesis relies on volume rendering, splatting, or surface rendering. While surface rendering is typically the fastest, it struggles to accurately model fuzzy geometry like hair. Conversely, alpha-blending techniques excel at representing fuzzy materials but require an unbounded number of samples per ray (P1). Further overheads are induced by empty-space skipping in volume rendering (P2) and sorting input primitives in splatting (P3). We present a novel representation for real-time view synthesis where the (P1) number of sampling locations is small and bounded, (P2) sampling locations are efficiently found via rasterization, and (P3) rendering is sorting-free. We achieve this by representing objects as semi-transparent multi-layer meshes rendered in a fixed order. First, we model surface layers as signed distance function (SDF) shells with optimal spacing learned during training. Then, we bake them as meshes and fit UV textures. Unlike single-surface methods, our multi-layer representation effectively models fuzzy objects. In contrast to volume and splatting-based methods, our approach enables real-time rendering on low-power laptops and smartphones.
LaTeX BibTeX Citation:
@inproceedings{Esposito2025CVPR,
  author = {Stefano Esposito and Anpei Chen and Christian Reiser and Samuel Rota Bulò and Lorenzo Porzi and Katja Schwarz and Christian Richardt and Michael Zollhöfer and Peter Kontschieder and Andreas Geiger},
  title = {Volumetric Surfaces: Representing Fuzzy Geometries with Layered Meshes},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2025}
}
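The sorting-free property described in the abstract follows from compositing the semi-transparent layers in a fixed, known order. As a minimal sketch (not the paper's implementation), front-to-back alpha compositing of K pre-ordered layers looks like this; `composite_layers` and its array shapes are illustrative assumptions:

```python
import numpy as np

def composite_layers(colors, alphas):
    """Front-to-back alpha compositing of K semi-transparent layers.

    colors: (K, 3) RGB color per layer, ordered front to back.
    alphas: (K,) opacity per layer.
    Because the layer order is fixed at construction time, no per-pixel
    sorting of primitives is needed at render time.
    """
    out = np.zeros(3)
    transmittance = 1.0  # fraction of light not yet absorbed
    for c, a in zip(colors, alphas):
        out += transmittance * a * c   # this layer's contribution
        transmittance *= (1.0 - a)     # remaining transparency
    return out
```

Per-pixel, the number of blended samples is bounded by the (small) layer count K, which is the bounded-sampling property (P1).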
LaRa: Efficient Large-Baseline Radiance Fields
A. Chen, H. Xu, S. Esposito, S. Tang and A. Geiger
European Conference on Computer Vision (ECCV), 2024
Abstract: Radiance field methods have achieved photorealistic novel view synthesis and geometry reconstruction, but they are mostly applied in per-scene optimization or small-baseline settings. While several recent works investigate feed-forward reconstruction with large baselines by utilizing transformers, they all operate with a standard global attention mechanism and hence ignore the local nature of 3D reconstruction. We propose a method that unifies local and global reasoning in transformer layers, resulting in improved quality and faster convergence. Our model represents scenes as Gaussian Volumes and combines this with an image encoder and Group Attention Layers for efficient feed-forward reconstruction. Experimental results show that our model, trained for two days on four GPUs, achieves high fidelity in reconstructing 360° radiance fields and robustness to zero-shot and out-of-domain testing.
LaTeX BibTeX Citation:
@inproceedings{Chen2024ECCV,
  author = {Anpei Chen and Haofei Xu and Stefano Esposito and Siyu Tang and Andreas Geiger},
  title = {LaRa: Efficient Large-Baseline Radiance Fields},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year = {2024}
}
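The local reasoning the abstract contrasts with global attention can be illustrated by restricting self-attention to within-group token blocks. The sketch below is a hypothetical toy version (the names `group_attention` and `num_groups` are assumptions, not the paper's Group Attention Layer): tokens are split into contiguous groups and attend only inside their own group.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def group_attention(x, num_groups):
    """Toy within-group self-attention over (n, d) tokens.

    Tokens are partitioned into num_groups contiguous groups; attention
    is computed only inside each group, so cost scales with the group
    size rather than the full token count.
    """
    n, d = x.shape
    assert n % num_groups == 0, "tokens must split evenly into groups"
    g = n // num_groups
    out = np.empty_like(x)
    for i in range(num_groups):
        blk = x[i * g:(i + 1) * g]                  # (g, d) one group's tokens
        attn = softmax(blk @ blk.T / np.sqrt(d))    # (g, g) within-group weights
        out[i * g:(i + 1) * g] = attn @ blk
    return out
```

Alternating such local layers with standard global attention is one common way to combine local and global reasoning; the paper's exact layer design should be taken from the publication itself.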

