Autonomous Vision Group (AVG)

LIBOMNICAL: Omnidirectional Camera Calibration

LIBOMNICAL provides a MATLAB toolbox to calibrate central and slightly non-central catadioptric cameras as well as catadioptric stereo setups. It is the reference implementation of the centered projection model presented in Calibrating and Centering Quasi-Central Catadioptric Cameras (ICRA 2014) and also includes a number of central reference models. The calibration consists of an automatic corner extraction stage, a minimization stage, and a visualization of the calibration results. The toolbox contains demo images, with corners already extracted, for a quick demo calibration of a slightly non-central catadioptric stereo camera setup. After installing the library, the calibration runs with a single command as described in the readme file. All results are saved to a calibration folder as illustrated below. Besides the calibration result, the folder contains the calibration images with the detected and estimated checkerboard positions (left), a visualization of the reprojection error for all checkerboards of each camera (middle), and the reconstructed 3D positions of the calibration patterns (right).
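The minimization stage can be illustrated with a small sketch. The projection model below is a hypothetical simplified unified-sphere model, not libomnical's actual centered projection model, and the parameters f (focal length) and xi (mirror offset) are illustrative assumptions; the sketch fits them by least-squares on the reprojection residuals of synthetic corner observations.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical simplified sphere model (for illustration only): a 3D point is
# projected onto the unit sphere, shifted along the optical axis by xi, then
# perspectively projected with focal length f.
def project(X, f, xi):
    Xs = X / np.linalg.norm(X, axis=1, keepdims=True)  # onto unit sphere
    z = Xs[:, 2] + xi                                  # shift along optical axis
    return f * Xs[:, :2] / z[:, None]                  # perspective division

# Synthetic "checkerboard corners": ground-truth parameters generate the
# observations that a corner extraction stage would normally provide.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (40, 3)) + np.array([0.0, 0.0, 3.0])  # points in front
f_true, xi_true = 300.0, 0.9
obs = project(X, f_true, xi_true)

# Minimization stage: recover (f, xi) by least-squares on reprojection residuals.
def residuals(params):
    f, xi = params
    return (project(X, f, xi) - obs).ravel()

result = least_squares(residuals, x0=[200.0, 0.5])
f_est, xi_est = result.x
```

In the real toolbox the parameter vector additionally contains the mirror shape, extrinsics, and (for quasi-central setups) the centering terms, but the structure of the optimization is the same: stack all corner reprojection residuals and minimize them jointly.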


Omnidirectional Stereo

LIBOMNISTEREO provides a MATLAB reference implementation of our omnidirectional 3D reconstruction method for augmented Manhattan worlds, presented in Omnidirectional 3D Reconstruction in Augmented Manhattan Worlds (IROS 2014). In contrast to existing work, we do not rely on constructing virtual perspective views but instead optimize depth jointly in a unified omnidirectional space. We show that plane-based prior models can be applied even though planes in 3D do not project to planes in the omnidirectional domain. Towards this goal, we propose an omnidirectional slanted-plane Markov random field model which relies on plane hypotheses extracted using a voting scheme for 3D planes in omnidirectional space. To quantitatively evaluate our method, we introduce a dataset captured with our autonomous driving platform AnnieWAY, which we equipped with two horizontally aligned catadioptric cameras and a Velodyne HDL-64E laser scanner for precise ground-truth depth measurements. The figure below illustrates our results in two different scenarios:
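As a rough illustration of the plane-voting idea, the sketch below runs a Hough-style vote over a discretized plane parameter space (normal direction and offset) and returns the dominant plane hypothesis. The paper's actual scheme operates in omnidirectional space with its own parametrization; the triplet sampling, bin counts, and plane representation here are assumptions made for the example.

```python
import numpy as np

def vote_planes(points, n_samples=2000, n_theta=18, n_phi=36, n_d=20,
                d_max=5.0, seed=0):
    """Hough-style voting for the dominant 3D plane n.p = d (illustrative)."""
    rng = np.random.default_rng(seed)
    acc = np.zeros((n_theta, n_phi, n_d), dtype=int)
    for _ in range(n_samples):
        idx = rng.choice(len(points), 3, replace=False)
        p0, p1, p2 = points[idx]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) triplet
        n /= norm
        d = float(n @ p0)
        if d < 0:                         # canonical sign: offset non-negative
            n, d = -n, -d
        theta = np.arccos(np.clip(n[2], -1.0, 1.0))   # inclination in [0, pi]
        phi = np.arctan2(n[1], n[0]) + np.pi          # azimuth in [0, 2*pi]
        it = min(int(theta / np.pi * n_theta), n_theta - 1)
        ip = min(int(phi / (2 * np.pi) * n_phi), n_phi - 1)
        id_ = min(int(d / d_max * n_d), n_d - 1)
        acc[it, ip, id_] += 1             # cast one vote for this hypothesis
    # Read back the bin-center parameters of the strongest hypothesis.
    it, ip, id_ = np.unravel_index(acc.argmax(), acc.shape)
    theta = (it + 0.5) * np.pi / n_theta
    phi = (ip + 0.5) * 2 * np.pi / n_phi - np.pi
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    d = (id_ + 0.5) * d_max / n_d
    return n, d

# Demo: a dominant horizontal plane at z = 1 among random clutter.
rng = np.random.default_rng(1)
plane_pts = np.c_[rng.uniform(-2, 2, (200, 2)),
                  1.0 + rng.normal(0, 0.01, 200)]
clutter = rng.uniform(-2, 2, (50, 3))
n_est, d_est = vote_planes(np.vstack([plane_pts, clutter]))
```

In the MRF, hypotheses extracted this way would serve as the slanted-plane labels over which depth is optimized jointly.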


Datasets

We also provide a high-resolution (1400x1400 pixels) omnidirectional dataset (12607 frames) captured with a stereo pair of catadioptric cameras, including calibration parameters, OXTS inertial/GPS measurements, and Velodyne laser scans. This video shows the first part of the sequence:

Downloads

The code is published under the GNU General Public License; the datasets are published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License.

Citation

@INPROCEEDINGS{Schoenbein2014ICRA,
  author = {Miriam Schönbein and Tobias Strauss and Andreas Geiger},
  title = {Calibrating and Centering Quasi-Central Catadioptric Cameras},
  booktitle = {International Conference on Robotics and Automation (ICRA)},
  year = {2014}
}
@INPROCEEDINGS{Schoenbein2014IROS,
  author = {Miriam Schönbein and Andreas Geiger},
  title = {Omnidirectional 3D Reconstruction in Augmented Manhattan Worlds},
  booktitle = {International Conference on Intelligent Robots and Systems (IROS)},
  year = {2014}
}

