Semantic Segmentation Benchmark



This is the KITTI semantic segmentation benchmark. It consists of 200 semantically annotated training images as well as 200 test images. The data format and metrics conform to those of The Cityscapes Dataset.

The data can be downloaded here:


Note: On 12.04.2018 we fixed several annotation errors in the dataset. Please download the dataset again if you have an old version.
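
Because the labels follow the Cityscapes format, a ground truth label map can be loaded directly as a single-channel image. Below is a minimal sketch, assuming each pixel stores a Cityscapes label ID and that the file path follows the usual KITTI folder layout; both the path and the folder name are illustrative assumptions, not guaranteed by this page.

import numpy as np
from PIL import Image

# Load one semantic label map; each pixel stores a Cityscapes label ID.
# The path "training/semantic/000000_10.png" is an assumed example.
label_map = np.array(Image.open("training/semantic/000000_10.png"))
print(label_map.shape)       # (H, W): one label ID per pixel
print(np.unique(label_map))  # label IDs present in this frame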


Our evaluation table ranks all methods according to the PASCAL VOC intersection-over-union metric (IoU): IoU = TP/(TP+FP+FN), where TP, FP, and FN are the numbers of true positive, false positive, and false negative pixels, respectively. As in Cityscapes, we also use an instance-level intersection over union, iIoU = iTP/(iTP+FP+iFN). In contrast to the standard IoU measure, iTP and iFN are computed by weighting the contribution of each pixel by the ratio of the class' average instance size to the size of the respective ground truth instance. A sketch of both metrics follows the list below.

  • IoU class: Intersection over Union for each class, IoU = TP/(TP+FP+FN)
  • iIoU class: Instance Intersection over Union for each class, iIoU = iTP/(iTP+FP+iFN)
  • IoU category: Intersection over Union for each category, IoU = TP/(TP+FP+FN)
  • iIoU category: Instance Intersection over Union for each category, iIoU = iTP/(iTP+FP+iFN)
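
As a concrete illustration, here is a minimal sketch of both metrics in Python/NumPy. It assumes pred and gt are integer label maps of equal shape, that instance IDs follow the Cityscapes convention of class_id * 1000 + instance index, and that avg_inst_size is a precomputed average instance size for the class; these names and assumptions are ours, not taken from the official evaluation code.

import numpy as np

def iou(pred, gt, class_id):
    """Standard IoU = TP / (TP + FP + FN) for one class."""
    p = pred == class_id
    g = gt == class_id
    tp = np.logical_and(p, g).sum()
    fp = np.logical_and(p, ~g).sum()
    fn = np.logical_and(~p, g).sum()
    return tp / (tp + fp + fn)

def iiou(pred, gt_instance, class_id, avg_inst_size):
    """Instance-level iIoU = iTP / (iTP + FP + iFN). TP and FN pixels
    are weighted by avg_inst_size / (size of their ground truth
    instance); FP pixels keep weight 1."""
    p = pred == class_id
    # Assumes Cityscapes instance encoding: class_id * 1000 + index.
    g = gt_instance // 1000 == class_id
    itp = 0.0
    ifn = 0.0
    for inst_id in np.unique(gt_instance[g]):
        mask = gt_instance == inst_id
        w = avg_inst_size / mask.sum()  # weight of this instance's pixels
        itp += w * np.logical_and(p, mask).sum()
        ifn += w * np.logical_and(~p, mask).sum()
    fp = np.logical_and(p, ~g).sum()
    return itp / (itp + fp + ifn)

The category scores are obtained in the same way after mapping each class ID to its Cityscapes category.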


Additional information used by the methods
  • Laser Points: Method uses point clouds from a Velodyne laser scanner
  • Depth: Method uses depth from stereo
  • Video: Method uses two or more temporally adjacent images
  • Additional training data: Use of additional data sources for training (see details)


#  Method           Setting  Code  IoU class  iIoU class  IoU category  iIoU category  Runtime  Environment
1  VENUS_ROB                 code  56.32      25.96       79.06         60.02          0.5 s    8 cores @ 2.5 GHz (Python)
2  GoogLeNetV1_ROB                 45.29      18.75       74.44         47.61          0.05 s   1 core @ 2.5 GHz (C/C++)
3  GoogLeV1_CS                     43.63      16.40       71.83         41.63          0.03 ms  Titan Xp




Related Datasets

  • The Cityscapes Dataset: The Cityscapes dataset was recorded in 50 German cities and offers high-quality pixel-level annotations of 5,000 frames in addition to a larger set of 20,000 weakly annotated frames.
  • WildDash: WildDash is a benchmark for semantic and instance segmentation. It aims to improve the expressiveness of performance evaluation for computer vision algorithms with regard to their robustness under real-world conditions.

Citation

When using this dataset in your research, we would appreciate it if you cite us:
@INPROCEEDINGS{Alhaija2017BMVC,
  author = {Hassan Abu Alhaija and Siva Karthik Mustikovela and Lars Mescheder and Andreas Geiger and Carsten Rother},
  title = {Augmented Reality Meets Deep Learning for Car Instance Segmentation in Urban Scenes},
  booktitle = {British Machine Vision Conference (BMVC)},
  year = {2017}
}


