Method

A Dynamic-Confidence 3D MOT Framework Based on Spatial-Temporal Association (LiDAR, online) [STMOT_PointRCNN]


Submitted on 17 May 2023 04:39 by
Ruihao Zeng (TransportLab, University of Sydney)

Running time: 0.01 s
Environment: 1 core @ 2.5 GHz (Python)

Method Description:
A dynamic-confidence tracker based on spatial-temporal information.
CPU-only; runs at ~311 FPS.
Detector: PointRCNN.
Parameters:
TBD
Latex Bibtex:
TBD
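The submission gives no implementation details yet (parameters TBD), so the following is only a minimal sketch of the kind of pipeline the description implies: greedy spatial association of per-frame 3D detections (e.g. from PointRCNN) to existing tracks, with a per-track confidence that is boosted on a match and decayed when the track goes unseen. All names, thresholds, and the gain/decay/drop values below are hypothetical illustrations, not the authors' actual method.

```python
import numpy as np

def associate(tracks, detections, max_dist=2.0):
    """Greedy nearest-neighbour matching of track positions to detection
    centroids by Euclidean distance (a hypothetical association cost)."""
    matches, unmatched = [], set(range(len(detections)))
    for ti, trk in enumerate(tracks):
        if not unmatched:
            break
        di = min(unmatched, key=lambda j: np.linalg.norm(trk["pos"] - detections[j]))
        if np.linalg.norm(trk["pos"] - detections[di]) < max_dist:
            matches.append((ti, di))
            unmatched.discard(di)
    return matches, unmatched

class DynamicConfidenceTracker:
    """Toy dynamic-confidence tracker: confidence rises on a matched
    detection, decays when a track is unseen, and the track is dropped
    once confidence falls below `drop`."""
    def __init__(self, gain=0.3, decay=0.2, drop=0.1):
        self.tracks, self.next_id = [], 0
        self.gain, self.decay, self.drop = gain, decay, drop

    def step(self, detections):
        matches, unmatched = associate(self.tracks, detections)
        hit = set()
        for ti, di in matches:
            trk = self.tracks[ti]
            trk["pos"] = detections[di]                      # update state
            trk["conf"] = min(1.0, trk["conf"] + self.gain)  # boost on match
            hit.add(ti)
        for ti, trk in enumerate(self.tracks):
            if ti not in hit:
                trk["conf"] -= self.decay                    # decay when unseen
        for di in unmatched:                                 # spawn new tracks
            self.tracks.append({"id": self.next_id,
                                "pos": detections[di], "conf": 0.5})
            self.next_id += 1
        self.tracks = [t for t in self.tracks if t["conf"] > self.drop]
        # report only tracks that are currently confident
        return [(t["id"], t["pos"]) for t in self.tracks if t["conf"] >= 0.5]
```

Here the confidence dynamics replace a fixed birth/death frame count: a track coasts through a short occlusion while its confidence decays, and is deleted only once the confidence drops below the threshold.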

Detailed Results

From all 29 test sequences, our benchmark computes the commonly used CLEAR MOT tracking metrics, MT/PT/ML counts, identity switches, and fragmentations [1,2]. The tables below show all of these metrics.


Benchmark  MOTA     MOTP     MODA     MODP
CAR        90.44 %  86.31 %  91.21 %  88.85 %

Benchmark  recall   precision  F1       TP     FP    FN    FAR      #objects  #trajectories
CAR        95.28 %  96.73 %    96.00 %  36272  1225  1797  11.01 %  43147     1107

Benchmark  MT       PT       ML      IDS  FRAG
CAR        84.15 %  10.00 %  5.85 %  265  322
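The detection-level figures above are internally consistent; a quick cross-check from the TP, FP, and FN counts alone reproduces the reported recall, precision, and F1:

```python
# Raw detection counts taken from the CAR results table above.
TP, FP, FN = 36272, 1225, 1797

recall = TP / (TP + FN)     # fraction of ground-truth boxes recovered
precision = TP / (TP + FP)  # fraction of reported boxes that are correct
f1 = 2 * precision * recall / (precision + recall)

print(f"recall={recall:.2%} precision={precision:.2%} F1={f1:.2%}")
# → recall=95.28% precision=96.73% F1=96.00%
```

MOTA cannot be re-derived this simply, since KITTI's evaluation applies additional sequence-level accounting beyond the FP/FN/IDS totals shown here.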



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.
