Method

Learning to Track with Object Permanence [PermaTrack]


Submitted on 18 Mar. 2021 05:14 by
Pavel Tokmakov (TRI)

Running time: 0.1 s
Environment: GPU @ 2.5 GHz (Python)

Method Description:
We introduce an end-to-end trainable approach for joint object
detection and tracking that can localize and associate objects
even through full occlusions. Our method is online, vision-based,
and does not rely on any heuristic post-processing steps.
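
To make this concrete, below is a minimal Python sketch of what such an online, occlusion-aware inference loop looks like. It is an illustration only, assuming a hypothetical PermanenceTracker stand-in; it is not the model or API from the paper.

# Minimal sketch of an online joint detection-and-tracking loop.
# `PermanenceTracker` is a hypothetical stand-in for the learned model
# described above, NOT the actual PermaTrack implementation or API.

class PermanenceTracker:
    """Recurrent tracker stub: consumes one frame at a time and returns
    object hypotheses with persistent track IDs, including objects the
    model believes are currently fully occluded."""

    def __init__(self):
        self.memory = None  # recurrent state encoding object permanence

    def step(self, frame):
        # A real model would update self.memory from the frame and then
        # decode visible and occluded hypotheses from it; stubbed here.
        self.memory = frame
        return []  # list of (track_id, box, is_occluded) tuples

tracker = PermanenceTracker()
for frame in ["frame_0", "frame_1", "frame_2"]:  # placeholder video
    tracks = tracker.step(frame)  # strictly causal single online pass;
                                  # occluded objects keep their IDs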
Parameters:
See manuscript.
LaTeX BibTeX:
@inproceedings{tokmakov2021learning,
  title={Learning to Track with Object Permanence},
  author={Tokmakov, Pavel and Li, Jie and Burgard, Wolfram and Gaidon, Adrien},
  booktitle={ICCV},
  year={2021}
}

Detailed Results

Over all 29 test sequences, our benchmark computes the commonly used tracking metrics CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below report all of these metrics.
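
For reference, the CLEAR MOT accuracy scores reduce to simple closed forms over the error counts [1]. The sketch below is a minimal illustration, not the KITTI evaluation code; the KITTI evaluator additionally handles don't-care regions, so plugging the raw counts from the tables below into these formulas will not exactly reproduce the reported MODA/MOTA.

# CLEAR MOT accuracy scores from raw error counts [1]; an illustrative
# sketch, not the KITTI devkit. `tp + fn` approximates the number of
# ground-truth objects.

def clearmot_accuracy(tp, fp, fn, ids):
    gt = tp + fn                       # approximate ground-truth count
    moda = 1.0 - (fn + fp) / gt        # detection accuracy, no ID errors
    mota = 1.0 - (fn + fp + ids) / gt  # adds the identity-switch penalty
    return moda, mota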


Benchmark    MOTA     MOTP     MODA     MODP
CAR          91.92 %  85.83 %  92.32 %  88.47 %
PEDESTRIAN   65.76 %  74.67 %  66.29 %  91.92 %

Benchmark    Recall   Precision  F1       TP     FP    FN    FAR      #objects  #trajectories
CAR          94.27 %  98.96 %    96.56 %  37033  389   2252  3.50 %   46490     1773
PEDESTRIAN   74.63 %  90.37 %    81.75 %  17474  1862  5941  16.74 %  23170     922

Benchmark    MT       PT       ML       IDS  FRAG
CAR          86.77 %  10.92 %  2.31 %   138  345
PEDESTRIAN   49.14 %  35.74 %  15.12 %  124  792
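
The detection-level entries (recall, precision, F1) follow their standard definitions and can be recomputed directly from the TP/FP/FN counts; a quick check against the CAR row reproduces the table values.

# Recall, precision, and F1 from the TP/FP/FN counts reported above.

def detection_metrics(tp, fp, fn):
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, f1

r, p, f1 = detection_metrics(tp=37033, fp=389, fn=2252)  # CAR row
print("recall=%.2f%% precision=%.2f%% F1=%.2f%%" % (100 * r, 100 * p, 100 * f1))
# -> recall=94.27% precision=98.96% F1=96.56%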



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.

