Semantic Instance Segmentation Evaluation



This is the KITTI semantic instance segmentation benchmark. It consists of 200 semantically annotated training images as well as 200 test images corresponding to the KITTI Stereo and Flow Benchmark 2015. The data format and metrics conform to those of The Cityscapes Dataset.

The data can be downloaded here:


The instance segmentation task focuses on detecting, segmenting and classifying object instances. To assess instance-level performance, we compute the average precision at the region level (AP) for each class and average it across a range of overlap thresholds to avoid a bias towards a specific value. As described in The Cityscapes Dataset, we use 10 different overlaps ranging from 0.5 to 0.95 in steps of 0.05. The overlap is computed at the region level, making it equivalent to the IoU of a single instance. We penalize multiple predictions of the same ground truth instance as false positives. To obtain a single, easy-to-compare compound score, we report the mean average precision AP, obtained by additionally averaging over the class label set. As a minor score, we add AP50% for an overlap value of 50 %.

  • AP:     Average precision as described above.
  • AP 50%: Average precision at 50 % overlap.
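The matching and averaging scheme above can be sketched as follows. This is a minimal illustration under stated assumptions, not the official Cityscapes evaluation script: the helper names `mask_iou`, `average_precision` and `mean_ap` are hypothetical, and predictions are greedily matched to ground truth in order of descending confidence, so that a second prediction claiming an already-matched instance counts as a false positive.

```python
import numpy as np

def mask_iou(pred_mask, gt_mask):
    """Region-level IoU between two boolean instance masks."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union > 0 else 0.0

def average_precision(pred_masks, scores, gt_masks, iou_thresh):
    """AP at a single overlap threshold. Predictions are processed in
    order of descending confidence; once a ground-truth instance is
    matched, further predictions of it become false positives."""
    order = np.argsort(scores)[::-1]
    matched = set()
    tp = np.zeros(len(pred_masks))
    for rank, i in enumerate(order):
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(gt_masks):
            if j in matched:
                continue  # GT already claimed by a higher-scoring prediction
            iou = mask_iou(pred_masks[i], gt)
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_iou >= iou_thresh:
            matched.add(best_j)
            tp[rank] = 1
    fp = 1 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / max(len(gt_masks), 1)
    precision = tp_cum / (tp_cum + fp_cum)
    # area under the precision-recall curve (step-wise integration)
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap

def mean_ap(pred_masks, scores, gt_masks):
    """Average AP over the 10 overlaps 0.50, 0.55, ..., 0.95."""
    thresholds = np.arange(0.5, 1.0, 0.05)
    return np.mean([average_precision(pred_masks, scores, gt_masks, t)
                    for t in thresholds])
```

In the benchmark this per-class score is additionally averaged over the class label set to yield the compound AP reported in the table below.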


Important Policy Update: As more and more non-published work and re-implementations of existing work are submitted to KITTI, we have established a new policy: from now on, only submissions with significant novelty that lead to a peer-reviewed paper in a conference or journal are allowed. Minor modifications of existing algorithms or student research projects are not allowed. Such work must be evaluated on a split of the training set. To ensure that our policy is adopted, new users must detail their status, describe their work and specify the targeted venue during registration. Furthermore, we will regularly delete all entries that are 6 months old but are still anonymous or do not have a paper associated with them. For conferences, 6 months are usually sufficient to determine whether a paper has been accepted and to add the bibliography information. For longer review cycles, you need to resubmit your results.
Additional information used by the methods
  • Laser Points: Method uses point clouds from Velodyne laser scanner
  • Depth: Method uses depth from stereo.
  • Video: Method uses 2 or more temporally adjacent images
  • Additional training data: Use of additional data sources for training (see details)

Rank  Method               AP     AP 50%  Runtime  Environment
 1    DH-OCR               23.53  45.53   0.8 s    1 core @ 2.5 GHz (C/C++)
 2    MaskRCNN_Xs          21.38  42.30   0.5 s    GPU @ 2.5 GHz (Python)
 3    MaskRCNN             20.46  39.14   1 s      1 core @ 2.5 GHz (C/C++)
 4    DH-OCR-V2            17.11  37.10   2 s      1 core @ 2.5 GHz (C/C++)
 5    NL_ROI_ROB           16.37  34.51   1 s      GPU @ 1.5 GHz (Python)
 6    MRCNN++_VSCMLab_ROB   9.34  21.10   1 s      GPU @ 2.5 GHz (Python)
 7    MRCNN_VSCMLab_ROB     8.90  21.79   1 s      GPU @ 2.5 GHz (Python)
 8    lkl_net               8.05  22.88   0.15 s   1 core @ 2.5 GHz (C/C++)
 9    KC_MHT                7.49  21.58   0.01 s   1 core @ 2.5 GHz (C/C++)
10    DB_KK                 7.44  21.52   1 s      1 core @ 2.5 GHz (C/C++)
11    DT_KDS                6.63  21.10   0.01 s   1 core @ 2.5 GHz (C/C++)
12    Mask_rcnn             6.55  20.49   0.5 s    1 core @ 2.5 GHz (C/C++)
13    DB_MR                 6.02  16.29   0.01 s   1 core @ 2.5 GHz (C/C++)
14    ST_BIN0               5.96  15.72   0.01 s   1 core @ 2.5 GHz (C/C++)
15    ST_BIN                5.52  15.49   0.08 s   1 core @ 2.5 GHz (C/C++)
16    lkl_net               5.35  14.43   1 s      1 core @ 2.5 GHz (C/C++)
17    DB_KR                 4.82  12.49   1 s      1 core @ 2.5 GHz (C/C++)
18    BAMRCNN_ROB (code)    0.68   1.81   1 s      4 cores @ 2.5 GHz (Python)
      R. Girshick, I. Radosavovic, G. Gkioxari, P. Dollár and K. He: Detectron. 2018.
19    DB_NN                 0.00   0.00   0.6 s    1 core @ 2.5 GHz (C/C++)




Related Datasets

  • The Cityscapes Dataset: The Cityscapes dataset was recorded in 50 German cities and offers high-quality pixel-level annotations of 5 000 frames in addition to a larger set of 20 000 weakly annotated frames.
  • Wilddash: Wilddash is a benchmark for semantic and instance segmentation. It aims to improve the expressiveness of performance evaluation for computer vision algorithms in regard to their robustness under real-world conditions.

Citation

If you use this dataset in your research, please cite us:
@ARTICLE{Alhaija2018IJCV,
  author = {Hassan Alhaija and Siva Mustikovela and Lars Mescheder and Andreas Geiger and Carsten Rother},
  title = {Augmented Reality Meets Computer Vision: Efficient Data Generation for Urban Driving Scenes},
  journal = {International Journal of Computer Vision (IJCV)},
  year = {2018}
}


