KITTI-360: A large-scale dataset with 3D&2D annotations


About

We present a large-scale dataset that contains rich sensory information and full annotations. We recorded several suburbs of Karlsruhe, Germany, corresponding to over 320k images and 100k laser scans over a driving distance of 73.7 km. We annotate both static and dynamic 3D scene elements with rough bounding primitives and transfer this information into the image domain, resulting in dense semantic & instance annotations for both 3D point clouds and 2D images.

  • Driving distance: 73.7 km; frames: 4 × 83,000 (one stream per camera)
  • All frames accurately geolocalized (OpenStreetMap)
  • Semantic label definition consistent with Cityscapes; 19 classes used for evaluation
  • Each instance is assigned a consistent instance ID across all frames (see the sketch below)
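The consistent-ID bullet above admits a quick illustration. Below is a minimal sketch of decoding an instance image, assuming the Cityscapes-style encoding (pixels of instance classes store semanticId * 1000 + instanceId, stuff pixels store the plain semanticId) and a hypothetical file path; the devkit linked under Tools contains the authoritative readers.

import numpy as np
from PIL import Image

def decode_instance_map(png_path):
    # Assumed Cityscapes-style encoding: global_id = semanticId * 1000 +
    # instanceId for instance classes; stuff pixels hold the bare semanticId.
    global_ids = np.asarray(Image.open(png_path), dtype=np.int64)
    semantic = np.where(global_ids < 1000, global_ids, global_ids // 1000)
    instance = np.where(global_ids < 1000, 0, global_ids % 1000)
    return semantic, instance

# Placeholder path, not an official directory layout.
semantic, instance = decode_instance_map("instance/0000000000.png")
print("classes present:", np.unique(semantic))
print("car instances (id 26 in Cityscapes):", np.unique(instance[semantic == 26]))

Because the IDs are consistent across frames, the same instance value refers to the same object in every image in which it appears.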

Sensors

For our data collection we equipped a station wagon with one 180° fisheye camera on each side and a 90° perspective stereo camera (baseline 60 cm) facing forward. Furthermore, we mounted a Velodyne HDL-64E and a SICK LMS 200 laser scanning unit in pushbroom configuration on top of the roof. This setup is similar to the one used in KITTI, except that we gain a full 360° field of view from the additional fisheye cameras and the pushbroom laser scanner, whereas KITTI provides only perspective images and Velodyne laser scans with a 26.8° vertical field of view. In addition, our system is equipped with an IMU/GPS localization unit.
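As a starting point for the laser data, here is a minimal sketch for reading a single Velodyne scan. It assumes KITTI-360 reuses the raw KITTI binary layout (float32 records of x, y, z, reflectance); the file path is illustrative only.

import numpy as np

def load_velodyne_scan(path):
    # Assumed layout per point: four float32 values x, y, z (meters, sensor
    # frame) and reflectance, as in the raw KITTI format.
    scan = np.fromfile(path, dtype=np.float32).reshape(-1, 4)
    return scan[:, :3], scan[:, 3]

points, reflectance = load_velodyne_scan("velodyne_points/data/0000000000.bin")
print(points.shape[0], "points, mean range:",
      np.linalg.norm(points, axis=1).mean(), "m")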

Examples

Tools

We provide utility scripts for loading and inspecting the 2D and 3D labels (a small usage sketch follows the links below):
https://github.com/autonomousvision/kitti360scripts
We also released our annotation tool that allows for labeling street scenes in 3D space:
https://github.com/autonomousvision/kitti360labeltool
[NEW] Many thanks to Clemens Mosig for providing the ROS publisher node for our dataset:
https://github.com/dcmlr/kitti360_ros_player
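As a quick orientation, the sketch below lists the evaluation classes via the devkit's label definitions. The import path and attribute names (kitti360scripts.helpers.labels, ignoreInEval, and so on) are assumptions based on the Cityscapes-compatible label definition; check the repository for the exact module layout.

# Assumption: the devkit mirrors cityscapesScripts' labels module; the
# import path and field names are not guaranteed by this page.
from kitti360scripts.helpers.labels import labels

# Print the classes that count toward the 19-class evaluation.
for label in labels:
    if not label.ignoreInEval:
        print(f"{label.id:3d} {label.name:20s} color={label.color}")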

Copyright

All datasets and benchmarks on this page are copyrighted by us and published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. This means that you must attribute the work in the manner specified by the authors; you may not use this work for commercial purposes; and if you alter, transform, or build upon this work, you may distribute the resulting work only under the same license. Per GDPR requirements, you need to register and specify the intended purpose of use before downloading and using the data.

Citation

If you find our dataset useful, please cite the following paper:

Yiyi Liao, Jun Xie, Andreas Geiger: "KITTI-360: A Novel Dataset and Benchmarks for Urban Scene Understanding in 2D and 3D." IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2022.

Announcements

December 29, 2022
Released 2D semantic labels and confidence maps of the right perspective camera (image_01).
November 1, 2021
Updated 2D & 3D semantic labels and confidence maps. Released train/val split.
September 14, 2021
Released all annotation bounding primitives including ground objects.
April 8, 2021
Released unrectified perspective images (see "download_2d_perspective_unrectified.sh" in our updated download scripts).
January 17, 2021
Released OXTS measurements and code for converting OXTS measurements to 6D poses in Euclidean space and vice versa (see the sketch after this list).
November 3, 2020
Released timestamps of RGB images and laser scans. Please check our updated download scripts.
October 6, 2020
Training data and development kit released: https://github.com/autonomousvision/kitti360Scripts.
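Regarding the January 17, 2021 announcement above, the following is a minimal sketch of one common way to map OXTS latitude/longitude to local metric coordinates: the Mercator projection used by the original KITTI raw devkit. Whether the released KITTI-360 converter follows the identical convention is an assumption; treat the released code as authoritative.

import numpy as np

EARTH_RADIUS = 6378137.0  # WGS-84 equatorial radius in meters

def latlon_to_mercator(lat_deg, lon_deg, scale):
    # Mercator projection as in the original KITTI raw devkit (assumed to
    # match the released KITTI-360 converter; verify against that code).
    x = scale * EARTH_RADIUS * np.radians(lon_deg)
    y = scale * EARTH_RADIUS * np.log(np.tan(np.pi / 4 + np.radians(lat_deg) / 2))
    return x, y

# Anchor the scale at the latitude of the first frame
# (hypothetical value near Karlsruhe).
scale = np.cos(np.radians(48.98))
x, y = latlon_to_mercator(49.0, 8.4, scale)
print(f"x = {x:.1f} m, y = {y:.1f} m")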
