Method

MAE pre-training for 3D multi-object tracking in autonomous driving [3DMAETracking]
[Anonymous Submission]

Submitted on 17 Jul. 2023 16:40 by
[Anonymous Submission]

Running time: 34 s
Environment: >8 cores @ 2.5 GHz (Python)

Method Description:
MAE pre-training for 3D multi-object tracking on LiDAR point clouds (a generic sketch of the MAE masking step is given below)
Parameters:
None
Latex Bibtex:
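
The description above names MAE (masked autoencoder) pre-training on LiDAR point clouds but gives no implementation details. The snippet below is only a minimal, generic sketch of the patch-masking step that MAE-style point-cloud pre-training typically starts from; the patch size, mask ratio, and random patch grouping are illustrative assumptions, not the submission's actual pipeline.

    import numpy as np

    def mask_point_patches(points, patch_size=32, mask_ratio=0.75, rng=None):
        """Group an (N, 3) point cloud into fixed-size patches and hide a random
        subset of them, in the spirit of MAE-style pre-training (generic sketch)."""
        rng = np.random.default_rng() if rng is None else rng
        n = (len(points) // patch_size) * patch_size   # drop the remainder
        order = rng.permutation(len(points))[:n]       # random grouping stands in
                                                       # for FPS/kNN patchification
        patches = points[order].reshape(-1, patch_size, 3)

        num_masked = int(round(mask_ratio * len(patches)))
        idx = rng.permutation(len(patches))
        masked_idx, visible_idx = idx[:num_masked], idx[num_masked:]
        # An encoder would see only the visible patches; a light decoder is then
        # trained to reconstruct the coordinates of the masked ones.
        return patches[visible_idx], patches[masked_idx], masked_idx

    # Example on a synthetic 4096-point cloud: 128 patches, 96 masked, 32 visible.
    cloud = np.random.default_rng(0).normal(size=(4096, 3)).astype(np.float32)
    visible, masked, masked_idx = mask_point_patches(cloud)
    print(visible.shape, masked.shape)   # (32, 32, 3) (96, 32, 3)
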

Detailed Results

From all 29 test sequences, our benchmark computes the commonly used tracking metrics CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below show all of these metrics.


Benchmark   MOTA     MOTP     MODA     MODP
CAR         83.20 %  85.31 %  83.45 %  88.39 %

Benchmark   recall   precision  F1       TP     FP   FN    FAR     #objects  #trajectories
CAR         86.81 %  98.24 %    92.18 %  33527  599  5093  5.38 %  36527     1209

Benchmark   MT       PT       ML      IDS  FRAG
CAR         62.77 %  29.85 %  7.38 %  86   277
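
For orientation, the sketch below shows how the CLEAR MOT quantities reported above relate to the accumulated counts, following the standard definitions of [1]. It is an illustration only: the official KITTI evaluation additionally handles per-frame matching, don't-care regions, and the MODA/MODP variants, so the reported MOTA is not expected to follow from these formulas alone (the recall, precision, and F1 columns do follow directly from the TP/FP/FN counts).

    def clear_mot_summary(tp, fp, fn, id_switches, total_match_overlap):
        """CLEAR MOT-style summary from accumulated counts (illustrative sketch).

        tp, fp, fn          -- matched detections, false positives, missed GT boxes
        id_switches         -- identity switches accumulated over the sequence
        total_match_overlap -- sum of the overlap (e.g. IoU) of all matched pairs
        """
        gt = tp + fn                                  # total ground-truth objects
        recall = tp / gt
        precision = tp / (tp + fp)
        f1 = 2 * precision * recall / (precision + recall)
        mota = 1.0 - (fn + fp + id_switches) / gt     # tracking errors per GT box
        motp = total_match_overlap / tp               # mean overlap of the matches
        return {"MOTA": mota, "MOTP": motp,
                "recall": recall, "precision": precision, "F1": f1}

    # Example with the CAR counts from the table above; the overlap total is a
    # placeholder value, not taken from the benchmark output.
    print(clear_mot_summary(tp=33527, fp=599, fn=5093, id_switches=86,
                            total_match_overlap=28000.0))
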



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.

