Method

You Only Need Two Detectors to Achieve Multi-Modal 3D Multi-Object Tracking [YONTD-MOT]
https://github.com/wangxiyang2022/YONTD-MOT

Submitted on 29 Mar. 2023 10:15 by
Xiyang Wang (Chongqing University (CQU SLAMMOT Team))

Running time: 0.1 s
Environment: GPU @ >3.5 GHz (Python)

Method Description:
Firstly, a new multi-object tracking framework
based on multi-modal fusion is proposed in this
paper. By integrating object detection and multi-
object tracking into the same model, the
framework avoids the complex data association
process of the classical tracking-by-detection
(TBD) paradigm and requires no additional
training. Secondly, the confidence of historical
trajectory regression is explored: the possible
states of a trajectory in the current frame (weak
object or strong object) are analyzed, and a
confidence fusion module is designed to guide
non-maximum suppression between trajectories and
detections for ordered association. Finally,
extensive experiments are conducted on the KITTI
and Waymo datasets. The results show that the
proposed method achieves robust tracking using
only two single-modal detectors and is more
accurate than many recent TBD-based multi-modal
tracking methods. The source code is available at
https://github.com/wangxiyang2022/YONTD-MOT
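The confidence-guided suppression step described above can be sketched as follows. This is a simplified 2-D, single-class illustration under assumed names (`fused_nms` and `iou_2d` are hypothetical), not the paper's actual implementation, which operates on 3-D boxes with fused multi-modal confidences:

```python
def iou_2d(a, b):
    """Axis-aligned IoU between two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def fused_nms(track_boxes, track_conf, det_boxes, det_conf, iou_thr=0.5):
    """Pool trajectory-regressed boxes with fresh detections and suppress
    overlaps in descending order of confidence, so a strong historical
    trajectory can override a weaker duplicate detection (and vice versa)."""
    boxes = list(track_boxes) + list(det_boxes)
    scores = list(track_conf) + list(det_conf)
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [j for j in order if iou_2d(boxes[best], boxes[j]) < iou_thr]
    return keep  # indices < len(track_boxes) are trajectories, the rest detections

# Toy example: the trajectory box (confidence 0.9) suppresses the
# overlapping detection (0.8); the distant detection survives.
kept = fused_nms([[0, 0, 2, 2]], [0.9],
                 [[0.1, 0, 2, 2], [10, 10, 12, 12]], [0.8, 0.7])
print(kept)  # [0, 2]
```

Because trajectories and detections compete in one ranked pool, the suppression order itself yields the "ordered association" the description refers to, without a separate matching stage.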
Parameters:
TBD
Latex Bibtex:
@article{wang2023you,
title={You Only Need Two Detectors to Achieve
Multi-Modal 3D Multi-Object Tracking},
author={Wang, Xiyang and He, Jiawei and Fu,
Chunyun and Meng, Ting and Huang, Mingguang},
journal={arXiv preprint arXiv:2304.08709},
year={2023}
}

Detailed Results

From all 29 test sequences, our benchmark computes the commonly used tracking metrics CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below show all of these metrics.


Benchmark    MOTA     MOTP     MODA     MODP
CAR          85.19 %  87.10 %  85.25 %  89.62 %
PEDESTRIAN   28.93 %  65.99 %  30.68 %  88.98 %

Benchmark    Recall   Precision  F1       TP     FP    FN     FAR      #Objects  #Trajectories
CAR          89.76 %  96.66 %    93.08 %  34125  1181  3892   10.62 %  38435     795
PEDESTRIAN   43.65 %  77.72 %    55.90 %  10173  2916  13132  26.21 %  15431     718

Benchmark    MT       PT       ML       IDS  FRAG
CAR          67.54 %  25.38 %  7.08 %   21   342
PEDESTRIAN   11.00 %  57.04 %  31.96 %  404  1697
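For reference, the CLEAR MOT accuracy score in the first table combines misses, false positives, and identity switches into a single ratio, MOTA = 1 - (FN + FP + IDS) / GT. A minimal sketch (the function name and toy counts are illustrative; KITTI's evaluation additionally filters truncated and don't-care regions, so plugging the raw counts above into this formula will not reproduce the table's MOTA exactly):

```python
def mota(fn, fp, ids, num_gt):
    """CLEAR MOT accuracy: 1 - (misses + false positives + ID switches) / GT objects."""
    return 1.0 - (fn + fp + ids) / num_gt

# Toy counts, for illustration only.
print(mota(fn=20, fp=10, ids=2, num_gt=100))  # 0.68
```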



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.
