Publications of Christian Reiser

Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis
C. Reiser, S. Garbin, P. Srinivasan, D. Verbin, R. Szeliski, B. Mildenhall, J. Barron, P. Hedman and A. Geiger
International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), 2024
Abstract: While surface-based view synthesis algorithms are appealing due to their low computational requirements, they often struggle to reproduce thin structures. In contrast, more expensive methods that model the scene's geometry as a volumetric density field (e.g. NeRF) excel at reconstructing fine geometric detail. However, density fields often represent geometry in a "fuzzy" manner, which hinders exact localization of the surface. In this work, we modify density fields to encourage them to converge towards surfaces, without compromising their ability to reconstruct thin structures. First, we employ a discrete opacity grid representation instead of a continuous density field, which allows opacity values to discontinuously transition from zero to one at the surface. Second, we anti-alias by casting multiple rays per pixel, which allows occlusion boundaries and subpixel structures to be modelled without using semi-transparent voxels. Third, we minimize the binary entropy of the opacity values, which facilitates the extraction of surface geometry by encouraging opacity values to binarize towards the end of training. Lastly, we develop a fusion-based meshing strategy followed by mesh simplification and appearance model fitting. The compact meshes produced by our model can be rendered in real-time on mobile devices and achieve significantly higher view synthesis quality compared to existing mesh-based approaches.
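A minimal numpy sketch of the binarization idea described above: a binary-entropy regularizer that penalizes "fuzzy" opacity values and vanishes once opacities approach 0 or 1. The function name, clipping epsilon, and uniform averaging are illustrative assumptions; the paper's actual loss weighting and training schedule differ.
import numpy as np

def binary_entropy_loss(opacity, eps=1e-6):
    # Mean binary entropy of opacity values in [0, 1]; maximal at 0.5,
    # near zero for near-binary opacities. (Sketch, not the paper's code.)
    o = np.clip(opacity, eps, 1.0 - eps)
    return float(np.mean(-o * np.log(o) - (1.0 - o) * np.log(1.0 - o)))

print(binary_entropy_loss(np.array([0.5, 0.5])))    # ~0.693 (maximally fuzzy)
print(binary_entropy_loss(np.array([0.01, 0.99])))  # ~0.056 (nearly binary)
Minimizing this term toward the end of training drives opacities to 0 or 1, which is what makes clean surface extraction possible.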
LaTeX BibTeX Citation:
@inproceedings{Reiser2024SIGGRAPH,
  author = {Christian Reiser and Stephan Garbin and Pratul P. Srinivasan and Dor Verbin and Richard Szeliski and Ben Mildenhall and Jonathan T. Barron and Peter Hedman and Andreas Geiger},
  title = {Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis},
  booktitle = {International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH)},
  year = {2024}
}
MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes
C. Reiser, R. Szeliski, D. Verbin, P. Srinivasan, B. Mildenhall, A. Geiger, J. Barron and P. Hedman
International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), 2023
Abstract: Neural radiance fields enable state-of-the-art photorealistic view synthesis. However, existing radiance field representations are either too compute-intensive for real-time rendering or require too much memory to scale to large scenes. We present a Memory-Efficient Radiance Field (MERF) representation that achieves real-time rendering of large-scale scenes in a browser. MERF reduces the memory consumption of prior sparse volumetric radiance fields using a combination of a sparse feature grid and high-resolution 2D feature planes. To support large-scale unbounded scenes, we introduce a novel contraction function that maps scene coordinates into a bounded volume while still allowing for efficient ray-box intersection. We design a lossless procedure for baking the parameterization used during training into a model that achieves real-time rendering while still preserving the photorealistic view synthesis quality of a volumetric radiance field.
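The contraction function can be sketched as the piecewise-projective mapping below: points inside the unit cube are left untouched, and outside it only the largest-magnitude coordinate is squashed nonlinearly, mapping all of space into [-2, 2]^3 while keeping axis-aligned planes planar (and hence ray-box intersection cheap). Variable names and edge-case handling here are illustrative, not the paper's implementation.
import numpy as np

def merf_contract(x):
    # Piecewise-projective contraction (sketch): identity inside the unit
    # cube; otherwise project onto the cube face and squash only the
    # max-magnitude coordinate into (-2, 2).
    x = np.asarray(x, dtype=np.float64)
    inf_norm = np.max(np.abs(x))
    if inf_norm <= 1.0:
        return x.copy()
    out = x / inf_norm                      # project onto the unit cube face
    k = np.argmax(np.abs(x))                # max-magnitude coordinate
    out[k] = (2.0 - 1.0 / np.abs(x[k])) * np.sign(x[k])
    return out

print(merf_contract([0.3, -0.2, 0.9]))   # inside: unchanged
print(merf_contract([10.0, 4.0, -2.0]))  # outside: -> [1.9, 0.4, -0.2]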
LaTeX BibTeX Citation:
@inproceedings{Reiser2023SIGGRAPH,
  author = {Christian Reiser and Richard Szeliski and Dor Verbin and Pratul P. Srinivasan and Ben Mildenhall and Andreas Geiger and Jonathan T. Barron and Peter Hedman},
  title = {MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes},
  booktitle = {International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH)},
  year = {2023}
}
KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs
C. Reiser, S. Peng, Y. Liao and A. Geiger
International Conference on Computer Vision (ICCV), 2021
Abstract: NeRF synthesizes novel views of a scene with unprecedented quality by fitting a neural radiance field to RGB images. However, NeRF requires querying a deep Multi-Layer Perceptron (MLP) millions of times, leading to slow rendering times, even on modern GPUs. In this paper, we demonstrate that significant speed-ups are possible by utilizing thousands of tiny MLPs instead of one single large MLP. In our setting, each individual MLP only needs to represent parts of the scene, thus smaller and faster-to-evaluate MLPs can be used. By combining this divide-and-conquer strategy with further optimizations, rendering is accelerated by two orders of magnitude compared to the original NeRF model without incurring high storage costs. Further, using teacher-student distillation for training, we show that this speed-up can be achieved without sacrificing visual quality.
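A toy numpy sketch of the divide-and-conquer routing: the scene's bounding box is partitioned into a regular grid, each cell owning an independent tiny MLP, and a query point is evaluated only by its cell's network. The class names, grid resolution, and network sizes are illustrative placeholders, not the paper's trained model or CUDA implementation.
import numpy as np

rng = np.random.default_rng(0)

class TinyMLP:
    # A tiny 2-layer ReLU MLP mapping a 3D point to (r, g, b, sigma).
    def __init__(self, hidden=32):
        self.w1 = rng.normal(0, 0.1, (3, hidden))
        self.w2 = rng.normal(0, 0.1, (hidden, 4))
    def __call__(self, x):
        return np.maximum(x @ self.w1, 0.0) @ self.w2

class KiloNeRFGrid:
    # Sketch of the routing: res^3 cells over [lo, hi]^3, one MLP per cell.
    def __init__(self, res=4, lo=-1.0, hi=1.0):
        self.res, self.lo, self.hi = res, lo, hi
        self.mlps = [TinyMLP() for _ in range(res ** 3)]
    def query(self, x):
        x = np.asarray(x, dtype=np.float64)
        idx = np.clip(((x - self.lo) / (self.hi - self.lo)
                       * self.res).astype(int), 0, self.res - 1)
        flat = idx[0] * self.res ** 2 + idx[1] * self.res + idx[2]
        return self.mlps[flat](x)

grid = KiloNeRFGrid()
print(grid.query([0.2, -0.5, 0.7]))  # color + density from one tiny network
Because each query touches only one small network instead of a single large MLP, the per-sample cost drops sharply, which is the source of the reported speed-up.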
LaTeX BibTeX Citation:
@inproceedings{Reiser2021ICCV,
  author = {Christian Reiser and Songyou Peng and Yiyi Liao and Andreas Geiger},
  title = {KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year = {2021}
}
Learning Implicit Surface Light Fields
M. Oechsle, M. Niemeyer, C. Reiser, L. Mescheder, T. Strauss and A. Geiger
International Conference on 3D Vision (3DV), 2020
Abstract: Implicit representations of 3D objects have recently achieved impressive results on learning-based 3D reconstruction tasks. While existing works use simple texture models to represent object appearance, photo-realistic image synthesis requires reasoning about the complex interplay of light, geometry and surface properties. In this work, we propose a novel implicit representation for capturing the visual appearance of an object in terms of its surface light field. In contrast to existing representations, our implicit model represents surface light fields in a continuous fashion and independent of the geometry. Moreover, we condition the surface light field with respect to the location and color of a small light source. Compared to traditional surface light field models, this allows us to manipulate the light source and relight the object using environment maps. We further demonstrate the capabilities of our model to predict the visual appearance of an unseen object from a single real RGB image and corresponding 3D shape information. As evidenced by our experiments, our model is able to infer rich visual appearance including shadows and specular reflections. Finally, we show that the proposed representation can be embedded into a variational auto-encoder for generating novel appearances that conform to the specified illumination conditions.
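A hypothetical numpy sketch of the conditioning interface only: a single continuous function maps a surface point, a viewing direction, and a point light's position and color to an RGB value. The helper names, input layout, and toy one-hidden-layer network are assumptions for illustration and do not reflect the paper's architecture.
import numpy as np

rng = np.random.default_rng(1)

def make_mlp(din, dout, hidden=64):
    # One-hidden-layer ReLU network as plain matrix products (toy stand-in).
    w1 = rng.normal(0, 0.1, (din, hidden))
    w2 = rng.normal(0, 0.1, (hidden, dout))
    return lambda z: np.maximum(z @ w1, 0.0) @ w2

# f(point, view_dir, light_pos, light_rgb) -> rgb: appearance is predicted
# continuously per surface point, conditioned on the light source.
surface_light_field = make_mlp(3 + 3 + 3 + 3, 3)

def shade(point, view_dir, light_pos, light_rgb):
    z = np.concatenate([point, view_dir, light_pos, light_rgb])
    return surface_light_field(z)

print(shade(np.zeros(3), np.array([0.0, 0.0, 1.0]),
            np.array([1.0, 1.0, 1.0]), np.array([1.0, 1.0, 1.0])))
Conditioning on the light's position and color is what allows relighting: sweeping light_pos or light_rgb at test time yields new appearances without changing the geometry.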
LaTeX BibTeX Citation:
@inproceedings{Oechsle2020THREEDV,
  author = {Michael Oechsle and Michael Niemeyer and Christian Reiser and Lars Mescheder and Thilo Strauss and Andreas Geiger},
  title = {Learning Implicit Surface Light Fields},
  booktitle = {International Conference on 3D Vision (3DV)},
  year = {2020}
}