Method

A Dynamic-Confidence 3D MOT Framework Based on Spatial-Temporal Association [STMOT_v1]


Submitted on 18 May 2023 05:19 by
Ruihao Zeng (TransportLab, University of Sydney)

Running time: 0.01 s
Environment: 1 core @ 2.5 GHz (Python)

Method Description:
Online tracker, CPU-only, ~311 FPS (see the sketch below).
Parameters:
TBD
LaTeX BibTeX:

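The method description above names a dynamic-confidence, spatial-temporal online tracker. As a rough illustration only, the sketch below shows one way such an online loop can be organized: greedy frame-to-frame (temporal) association by 3D centroid distance, a track confidence that is raised on a match and decayed on a miss, and confidence-based birth and pruning. All function names, thresholds, and update rules are assumptions for illustration; they are not the submitted STMOT_v1 implementation, whose parameters are listed as TBD above.

    # A minimal sketch of a dynamic-confidence online 3D tracking loop,
    # assuming greedy centroid-distance association and a simple
    # raise-on-match / decay-on-miss confidence rule. Thresholds and the
    # update rule are illustrative assumptions, not STMOT_v1 parameters.
    import math
    from itertools import count

    _next_id = count(1)

    class Track:
        def __init__(self, center, score):
            self.id = next(_next_id)
            self.center = center            # (x, y, z) object centroid
            self.confidence = score         # dynamic track confidence

        def update(self, center, score, gain=0.3):
            # Matched this frame: adopt the detection and raise confidence.
            self.center = center
            self.confidence = min(1.0, self.confidence + gain * score)

        def miss(self, decay=0.8):
            # Unmatched this frame: decay confidence instead of deleting at once.
            self.confidence *= decay

    def associate(tracks, detections, max_dist=2.0):
        # Greedy spatial association by 3D centroid distance (assumed metric).
        candidates = sorted(
            (math.dist(t.center, d["center"]), ti, di)
            for ti, t in enumerate(tracks)
            for di, d in enumerate(detections))
        matches, used_t, used_d = [], set(), set()
        for dist, ti, di in candidates:
            if dist > max_dist or ti in used_t or di in used_d:
                continue
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
        return matches, used_t, used_d

    def step(tracks, detections, birth_thresh=0.5, death_thresh=0.1):
        # One online step: associate, update matched tracks, decay missed
        # ones, spawn tracks from confident detections, prune weak tracks.
        matches, used_t, used_d = associate(tracks, detections)
        for ti, di in matches:
            tracks[ti].update(detections[di]["center"], detections[di]["score"])
        for ti, t in enumerate(tracks):
            if ti not in used_t:
                t.miss()
        for di, d in enumerate(detections):
            if di not in used_d and d["score"] >= birth_thresh:
                tracks.append(Track(d["center"], d["score"]))
        return [t for t in tracks if t.confidence >= death_thresh]

    # Usage: feed per-frame 3D detections as {"center": (x, y, z), "score": s}.
    tracks = []
    frame = [{"center": (10.0, 2.0, 0.0), "score": 0.9},
             {"center": (25.0, -1.5, 0.1), "score": 0.7}]
    tracks = step(tracks, frame)
    print([(t.id, round(t.confidence, 2)) for t in tracks])   # e.g. [(1, 0.9), (2, 0.7)]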
Detailed Results

From all 29 test sequences, the benchmark computes the commonly used tracking metrics: CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1, 2]. The tables below report all of these metrics for the CAR class.
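For reference, the MOTA and MOTP scores in the first table follow the standard CLEAR MOT definitions of [1], restated below:

\[
\mathrm{MOTA} = 1 - \frac{\sum_t \big(\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDSW}_t\big)}{\sum_t \mathrm{GT}_t},
\qquad
\mathrm{MOTP} = \frac{\sum_{t,i} d_{t,i}}{\sum_t c_t},
\]

where GT_t, FP_t, FN_t, and IDSW_t are the per-frame ground-truth, false-positive, miss, and identity-switch counts, c_t is the number of matches in frame t, and d_{t,i} is the bounding-box overlap of match i (so higher MOTP is better here). MODA and MODP are the detection-only counterparts that do not penalize identity switches.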


Benchmark   MOTA      MOTP      MODA      MODP
CAR         89.90 %   87.02 %   90.61 %   89.73 %

Benchmark   Recall    Precision   F1        TP      FP    FN     FAR      #Objects   #Trajectories
CAR         92.78 %   98.94 %     95.76 %   36484   392   2838   3.52 %   40684      1039

Benchmark   MT        PT       ML       IDS   FRAG
CAR         81.08 %   9.69 %   9.23 %   244   271
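
As a quick consistency check (an illustrative computation, not part of the benchmark output), recall, precision, and F1 in the second table follow directly from the reported TP/FP/FN counts:

    # Recompute recall, precision and F1 for the CAR class from the
    # reported counts (TP = 36484, FP = 392, FN = 2838).
    TP, FP, FN = 36484, 392, 2838

    recall = TP / (TP + FN)                   # 36484 / 39322
    precision = TP / (TP + FP)                # 36484 / 36876
    f1 = 2 * precision * recall / (precision + recall)

    print(f"recall    = {recall:.2%}")        # 92.78%
    print(f"precision = {precision:.2%}")     # 98.94%
    print(f"F1        = {f1:.2%}")            # 95.76%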



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.

