Method

MSeg1080_RVC
https://github.com/mseg-dataset/mseg-semantic

Submitted on 29 Jul. 2020 16:31 by
John Lambert (Georgia Tech)

Running time: 0.49 s
Environment: 1 core @ 3.0 GHz (Python)

Method Description:
This result is zero-shot cross-dataset transfer:
the model was never trained on KITTI.

We present MSeg, a composite dataset that unifies
semantic segmentation datasets from different
domains. A naive merge of the constituent datasets
yields poor performance due to inconsistent
taxonomies and annotation practices. We reconcile
the taxonomies and bring the pixel-level annotations
into alignment by relabeling more than 220,000
object masks in more than 80,000 images, requiring
more than 1.34 years of collective annotator effort.
The resulting composite dataset enables training a
single semantic segmentation model that functions
effectively across domains and generalizes to
datasets that were not seen during training. We
adopt zero-shot cross-dataset transfer as a
benchmark to systematically evaluate a model’s robustness
and show that MSeg training yields substantially
more robust models in comparison to training on
individual datasets or naive mixing of datasets
without the presented contributions.
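
For zero-shot evaluation, predictions made in MSeg's unified ("universal") taxonomy must be mapped into the test dataset's label space. A minimal sketch of such a remapping, assuming the 194-class universal taxonomy reported in the paper; the class IDs below are invented for illustration, and the real mapping tables ship with the mseg-semantic repository:

# Hypothetical sketch of taxonomy remapping for zero-shot evaluation:
# predictions in a 194-class universal taxonomy are mapped into the test
# dataset's label space via a dense lookup table.
import numpy as np

NUM_UNIVERSAL_CLASSES = 194  # universal taxonomy size reported in the paper

def build_remap_table(universal_to_test, ignore_id=255):
    """Dense lookup table: universal class ID -> test-dataset class ID."""
    table = np.full(NUM_UNIVERSAL_CLASSES, ignore_id, dtype=np.int64)
    for u_id, t_id in universal_to_test.items():
        table[u_id] = t_id
    return table

# Toy mapping: universal IDs 10 and 11 both fold into test class 3.
table = build_remap_table({10: 3, 11: 3, 42: 7})

pred_universal = np.array([[10, 42], [11, 0]])  # per-pixel universal IDs
pred_test = table[pred_universal]               # vectorized remap
# universal 10/11 -> 3, 42 -> 7, unmapped 0 -> ignore (255)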
Parameters:
Trained for 3 million crops @ 1080p. Inference is at
360p.
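
One plausible reading of "inference at 360p" is downscaling the input so its shorter side is 360 px, running the network, and upsampling the logits back to full resolution before the argmax. This reading is an assumption, not confirmed by the page, and `model` below is a placeholder callable:

# Sketch of low-resolution inference under the assumption above.
# `model` is a placeholder returning (1, C, h', w') logits.
import torch
import torch.nn.functional as F

def infer_at_360p(model, image):
    """image: (1, 3, H, W) float tensor -> (1, H, W) integer label map."""
    _, _, h, w = image.shape
    scale = 360.0 / min(h, w)
    small = F.interpolate(image, scale_factor=scale, mode="bilinear",
                          align_corners=False)
    logits = model(small)                            # (1, C, h', w')
    logits = F.interpolate(logits, size=(h, w), mode="bilinear",
                           align_corners=False)      # back to full resolution
    return logits.argmax(dim=1)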
Latex Bibtex:
@InProceedings{MSeg_2020_CVPR,
  author = {Lambert, John and Liu, Zhuang and Sener, Ozan and Hays, James and Koltun, Vladlen},
  title = {{MSeg}: A Composite Dataset for Multi-domain Semantic Segmentation},
  booktitle = {Computer Vision and Pattern Recognition (CVPR)},
  year = {2020}
}

Detailed Results

This page provides detailed results for the selected method(s). For the first 20 test images, we display the original image, the color-coded result and an error image. The error image contains four colors, as sketched in the code below:
red: the pixel has the wrong label and the wrong category
yellow: the pixel has the wrong label but the correct category
green: the pixel has the correct label
black: the ground-truth label is not used for evaluation
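
A minimal sketch of this coloring, assuming prediction and ground truth are given as integer label maps, 255 marks ignored pixels, and `LABEL_TO_CATEGORY` is a hypothetical label-to-category lookup (not KITTI's actual mapping):

# Sketch of the 4-color error image described in the legend above.
import numpy as np

LABEL_TO_CATEGORY = np.arange(256) // 8  # placeholder label->category table

def error_image(pred, gt, ignore_id=255):
    """Return an (H, W, 3) uint8 image colored per the legend."""
    h, w = gt.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)           # black by default
    valid = gt != ignore_id                              # ignored stays black
    correct = (pred == gt) & valid
    same_cat = (LABEL_TO_CATEGORY[pred] == LABEL_TO_CATEGORY[gt]) & valid
    out[valid & ~correct & ~same_cat] = (255, 0, 0)      # red: wrong label and category
    out[valid & ~correct & same_cat] = (255, 255, 0)     # yellow: wrong label, right category
    out[correct] = (0, 255, 0)                           # green: correct label
    return out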

Test Set Average

IoU class   iIoU class  IoU category  iIoU category
62.64       31.62       86.59         68.05
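These metrics follow the Cityscapes evaluation protocol: "IoU class" is the Jaccard index averaged over classes, and the "i" variants reweight each pixel's contribution by instance size. A minimal sketch of plain mean IoU from a confusion matrix, assuming integer label maps with 255 as the ignore label (the instance-weighted iIoU is not shown):

# Sketch: mean intersection-over-union from a dense confusion matrix.
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_id=255):
    valid = gt != ignore_id
    # Rows index ground truth, columns index prediction.
    conf = np.bincount(gt[valid] * num_classes + pred[valid],
                       minlength=num_classes ** 2
                       ).reshape(num_classes, num_classes)
    tp = np.diag(conf).astype(float)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp
    iou = np.where(union > 0, tp / np.maximum(union, 1), np.nan)
    return float(np.nanmean(iou))  # classes absent from GT and pred are skipped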

Test Images 0-9: each entry shows the input image, the color-coded prediction, and the error image (image content not reproduced here).


