Method

StrongFusion-MOT [StrongFusion-MOT]


Submitted on 2 Jun. 2022 02:22 by
Xiyang Wang (Chongqing University (CQU SLAMMOT Team))

Running time: 0.01 s
Environment: 8 cores @ 2.5 GHz (Python)

Method Description:
A stronger LiDAR-camera fusion based MOT method.
Parameters:
TBD
Latex Bibtex:
@ARTICLE{9976946,
  author={Wang, Xiyang and Fu, Chunyun and He, Jiawei and Wang, Sujuan and Wang, Jianwen},
  journal={IEEE Sensors Journal},
  title={StrongFusionMOT: A Multi-Object Tracking Method Based on LiDAR-Camera Fusion},
  year={2022},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/JSEN.2022.3226490}}

Detailed Results

From all 29 test sequences, our benchmark computes the commonly used tracking metrics CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below show all of these metrics.


Benchmark   MOTA      MOTP      MODA      MODP
CAR         85.63 %   85.17 %   85.73 %   88.15 %

Benchmark   recall    precision   F1        TP      FP    FN     FAR      #objects   #trajectories
CAR         87.77 %   99.24 %     93.16 %   33408   255   4654   2.29 %   36461      836
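
As a sanity check, the recall, precision, and F1 values follow directly from the TP/FP/FN counts above via the standard definitions. The short Python snippet below is illustrative only (not part of the benchmark or the submitted method):

# Illustrative check: recall / precision / F1 from the reported CAR counts.
# Standard definitions; not part of the KITTI evaluation code.
tp, fp, fn = 33408, 255, 4654

recall = tp / (tp + fn)                              # 33408 / 38062
precision = tp / (tp + fp)                           # 33408 / 33663
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean

print(f"recall={recall:.2%}  precision={precision:.2%}  F1={f1:.2%}")
# -> recall=87.77%  precision=99.24%  F1=93.16%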

Benchmark   MT        PT        ML       IDS   FRAG
CAR         66.15 %   27.85 %   6.00 %   34    399
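
For reference, [1] defines MOTA = 1 - (FN + FP + IDS) / GT and MODA = 1 - (FN + FP) / GT, where GT is the total number of ground-truth objects. A minimal Python sketch of this accumulation follows; note that the official KITTI evaluation additionally performs per-frame matching and handles don't-care regions, so the MOTA/MODA values above cannot be recomputed from the aggregate counts in these tables alone:

# Minimal sketch of the CLEAR MOT totals from [1]; illustrative only.
# The official KITTI devkit also performs per-frame matching and handles
# don't-care regions, so this alone will not reproduce the table exactly.
def clear_mot(num_gt: int, fn: int, fp: int, ids: int) -> tuple[float, float]:
    """Return (MOTA, MODA) given counts accumulated over all frames."""
    mota = 1.0 - (fn + fp + ids) / num_gt
    moda = 1.0 - (fn + fp) / num_gt
    return mota, moda

# Hypothetical toy counts, just to show the call shape:
mota, moda = clear_mot(num_gt=1000, fn=80, fp=20, ids=5)
print(f"MOTA={mota:.2%}  MODA={moda:.2%}")  # MOTA=89.50%  MODA=90.00%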

[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.

