Method

TrackR-CNN [TrackR-CNN]
https://github.com/VisualComputingInstitute/TrackR-CNN

Submitted on 4 Dec. 2019 19:54 by
Paul Voigtlaender (RWTH Aachen University)

Running time: 0.5 s
Environment: GPU @ 2.5 GHz (Python)

Method Description:
Mask R-CNN with ReID head and 3D convs.
http://openaccess.thecvf.com/content_CVPR_2019/html/Voigtlaender_MOTS_Multi-Object_Tracking_and_Segmentation_CVPR_2019_paper.html
Parameters:
Default (tuned on KITTI MOTS train set)
Latex Bibtex:
@inproceedings{Voigtlaender19CVPR_MOTS,
  author = {Paul Voigtlaender and Michael Krause and Aljo\v{s}a O\v{s}ep and Jonathon Luiten and Berin Balachandar Gnana Sekar and Andreas Geiger and Bastian Leibe},
  title = {{MOTS}: Multi-Object Tracking and Segmentation},
  booktitle = {CVPR},
  year = {2019},
}

Detailed Results

For all 29 test sequences, the benchmark computes the commonly used tracking metrics, adapted for the segmentation case: CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below show all of these metrics.


Benchmark    sMOTSA    MOTSA     MOTSP     MODSA     MODSP
CAR          67.00 %   79.60 %   85.10 %   81.50 %   88.30 %
PEDESTRIAN   47.30 %   66.10 %   74.60 %   68.40 %   91.80 %

Benchmark    recall    precision   F1        TP      FP     FN     FAR       #objects   #trajectories
CAR          85.10 %   96.00 %     90.20 %   31281   1310   5479   11.80 %   42100      843
PEDESTRIAN   74.10 %   92.90 %     82.40 %   15334   1179   5363   10.60 %   19711      416

Benchmark    MT        PT        ML        IDS   FRAG
CAR          74.90 %   22.80 %   2.30 %    692   1058
PEDESTRIAN   45.60 %   41.10 %   13.30 %   481   861
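The percentage scores above follow directly from the raw counts via the standard CLEAR MOT definitions [1]. As a minimal sketch (assuming the benchmark uses these formulas; sMOTSA additionally weights each true positive by its mask IoU, which the table does not report, so it cannot be reproduced from the counts alone):

```python
def clear_metrics(tp, fp, fn, ids):
    """Derive CLEAR MOT-style scores from raw detection/tracking counts.

    tp/fp/fn: true/false positives and false negatives over all frames.
    ids: number of identity switches.
    """
    gt = tp + fn  # total number of ground-truth objects
    recall = tp / gt
    precision = tp / (tp + fp)
    return {
        "recall": recall,
        "precision": precision,
        "f1": 2 * precision * recall / (precision + recall),
        "motsa": (tp - fp - ids) / gt,  # mask-based MOTA
        "modsa": (tp - fp) / gt,        # detection-only variant (ignores IDS)
    }

# CAR row from the tables above:
car = clear_metrics(tp=31281, fp=1310, fn=5479, ids=692)
print(round(100 * car["motsa"], 1))  # -> 79.6, matching the MOTSA column
print(round(100 * car["modsa"], 1))  # -> 81.5, matching the MODSA column
```

The same function reproduces the PEDESTRIAN row (e.g. MOTSA 66.1 % from TP=15334, FP=1179, FN=5363, IDS=481), which confirms the tables are internally consistent.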



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.

