Publications of Axel Sauer

Counterfactual Generative Networks
A. Sauer and A. Geiger
International Conference on Learning Representations (ICLR), 2021
Abstract: Neural networks are prone to learning shortcuts: they often model simple correlations, ignoring more complex ones that potentially generalize better. Prior work on image classification shows that instead of learning a connection to object shape, deep classifiers tend to exploit spurious correlations with low-level texture or the background for solving the classification task. In this work, we take a step towards more robust and interpretable classifiers that explicitly expose the task's causal structure. Building on current advances in deep generative modeling, we propose to decompose the image generation process into independent causal mechanisms that we train without direct supervision. By exploiting appropriate inductive biases, these mechanisms disentangle object shape, object texture, and background; hence, they allow for generating counterfactual images. We demonstrate the ability of our model to generate such images on MNIST and ImageNet. Further, we show that, despite being synthetic, the counterfactual images can improve out-of-distribution robustness with only a marginal drop in performance on the original classification task. Lastly, our generative model can be trained efficiently on a single GPU, exploiting common pre-trained models as inductive biases.
Latex Bibtex Citation:
@INPROCEEDINGS{Sauer2021ICLR,
  author = {Axel Sauer and Andreas Geiger},
  title = {Counterfactual Generative Networks},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year = {2021}
}
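The key structural idea is that the three mechanisms are combined by a fixed, analytic composition: the object texture is blended into the background wherever the shape mask is active. Below is a minimal NumPy sketch of this blending step with a toy square-on-gray example; the function name and the data are illustrative assumptions, not code from the paper.

import numpy as np

def compose(mask, foreground, background):
    """Analytic composition: blend the object texture into the
    background wherever the shape mask is active.
    mask:       (H, W, 1), values in [0, 1], the object shape
    foreground: (H, W, 3), the object texture
    background: (H, W, 3), the scene behind the object
    """
    return mask * foreground + (1.0 - mask) * background

# Toy example: a white square pasted onto a uniform gray scene.
H = W = 64
mask = np.zeros((H, W, 1))
mask[16:48, 16:48] = 1.0
foreground = np.ones((H, W, 3))       # object texture (all white)
background = np.full((H, W, 3), 0.5)  # gray scene
image = compose(mask, foreground, background)
print(image.shape)  # (64, 64, 3)

Because the mechanisms are independent, counterfactuals come almost for free: one can sample the shape of one class, the texture of another, and an arbitrary background, and compose them into an image that never occurs in the training data.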
Conditional Affordance Learning for Driving in Urban Environments (oral)
A. Sauer, N. Savinov and A. Geiger
Conference on Robot Learning (CoRL), 2018
Abstract: Most existing approaches to autonomous driving fall into one of two categories: modular pipelines, which build an extensive model of the environment, and imitation learning approaches, which map images directly to control outputs. A recently proposed third paradigm, direct perception, aims to combine the advantages of both by using a neural network to learn appropriate low-dimensional intermediate representations. However, existing direct perception approaches are restricted to simple highway situations, lacking the ability to navigate intersections, stop at traffic lights, or respect speed limits. In this work, we propose a direct perception approach which maps video input to intermediate representations suitable for autonomous navigation in complex urban environments given high-level directional inputs. Compared to state-of-the-art reinforcement and conditional imitation learning approaches, we achieve an improvement of up to 68% in goal-directed navigation on the challenging CARLA simulation benchmark. In addition, our approach is the first to handle traffic lights, speed signs, and smooth car-following, resulting in a significant reduction of traffic accidents.
Latex Bibtex Citation:
@INPROCEEDINGS{Sauer2018CORL,
  author = {Axel Sauer and Nikolay Savinov and Andreas Geiger},
  title = {Conditional Affordance Learning for Driving in Urban Environments},
  booktitle = {Conference on Robot Learning (CoRL)},
  year = {2018}
}
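To make the direct-perception idea concrete, the following is a toy sketch in PyTorch: a small CNN maps a camera frame to a low-dimensional affordance vector, and a hand-crafted controller turns the affordances into driving commands. The affordance names, network layers, and controller thresholds are illustrative assumptions; the paper's actual model additionally conditions on a high-level directional command and defines its own affordance set.

import torch
import torch.nn as nn

# Hypothetical affordance set; the paper defines its own.
AFFORDANCES = ["dist_to_vehicle", "red_light", "speed_limit",
               "lane_offset", "relative_angle"]

class AffordanceNet(nn.Module):
    """Toy direct-perception network: image -> affordance vector."""
    def __init__(self, n_aff=len(AFFORDANCES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_aff)

    def forward(self, img):
        return self.head(self.features(img))

def controller(aff, target_speed=8.0):
    """Rule-based controller on top of the predicted affordances."""
    dist, red_light, speed_limit, lane_offset, angle = aff.tolist()
    steer = -0.5 * lane_offset - 0.2 * angle  # keep to lane center
    if red_light > 0.5 or dist < 5.0:         # stop for lights / lead car
        return {"throttle": 0.0, "brake": 1.0, "steer": steer}
    throttle = min(target_speed, speed_limit) / 10.0
    return {"throttle": throttle, "brake": 0.0, "steer": steer}

net = AffordanceNet()
frame = torch.rand(1, 3, 128, 128)  # dummy camera frame
print(controller(net(frame)[0]))

Separating perception from control this way is what distinguishes direct perception from end-to-end imitation learning: the controller stays interpretable and tunable, while the network only has to predict a handful of human-understandable quantities.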