Method

StrongFusion-MOT


Submitted on 28 Jul. 2022 05:43 by
Xiyang Wang (Chongqing University (CQU SLAMMOT Team))

Running time: 0.01 s
Environment: >8 cores @ 2.5 GHz (Python + C/C++)

Method Description:
StrongFusionMOT, a multi-object tracking method based on LiDAR-camera fusion (see the BibTeX entry below).
Parameters:
TBD
Latex Bibtex:
@ARTICLE{9976946,
  author={Wang, Xiyang and Fu, Chunyun and He, Jiawei and Wang, Sujuan and Wang, Jianwen},
  journal={IEEE Sensors Journal},
  title={StrongFusionMOT: A Multi-Object Tracking Method Based on LiDAR-Camera Fusion},
  year={2022},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/JSEN.2022.3226490}
}

Detailed Results

From all 29 test sequences, our benchmark computes the commonly used tracking metrics: CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below show all of these metrics.


Benchmark    MOTA     MOTP     MODA     MODP
PEDESTRIAN   39.14 %  64.22 %  40.18 %  88.22 %

Benchmark    recall   precision  F1       TP     FP    FN    FAR      #objects  #trajectories
PEDESTRIAN   62.82 %  74.04 %    67.97 %  14694  5151  8698  46.31 %  24779     1053

Benchmark    MT       PT       ML       IDS  FRAG
PEDESTRIAN   26.12 %  51.89 %  21.99 %  241  1467
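
For reference, the detection-level numbers above can be cross-checked from the reported TP/FP/FN/IDS counts using the CLEAR MOT definitions of [1]. The following is a minimal Python sketch (the function name detection_metrics is illustrative, not part of the official KITTI evaluation code). Recall, precision, and F1 reproduce the table exactly; MODA/MOTA computed with TP + FN as the ground-truth count come out roughly 0.6 percentage points above the reported values, because the official evaluation counts ground-truth objects slightly differently (e.g. don't-care filtering).

def detection_metrics(tp, fp, fn, ids):
    """CLEAR MOT detection metrics [1]; helper name is illustrative."""
    recall = tp / (tp + fn)                             # 14694 / 23392 -> 62.82 %
    precision = tp / (tp + fp)                          # 14694 / 19845 -> 74.04 %
    f1 = 2 * precision * recall / (precision + recall)  # -> 67.97 %
    gt = tp + fn                        # naive ground-truth count (see note above)
    moda = 1.0 - (fp + fn) / gt         # MODA: detection accuracy, no identity switches
    mota = 1.0 - (fp + fn + ids) / gt   # MOTA: MODA plus identity switches
    return recall, precision, f1, moda, mota

# PEDESTRIAN row of the tables above
r, p, f1, moda, mota = detection_metrics(tp=14694, fp=5151, fn=8698, ids=241)
print(f"recall={r:.2%}  precision={p:.2%}  F1={f1:.2%}  MODA={moda:.2%}  MOTA={mota:.2%}")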



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.

