Method

A 3D Multi-Object Tracking Framework Based on Camera-LiDAR Fusion with Deep Association [DeepFusionMOT]
https://github.com/wangxiyang2022/DeepFusionMOT

Submitted on 19 Nov. 2021 04:50 by
Xiyang Wang (Chongqing University (CQU SLAMMOT Team))

Running time: 0.01 s
Environment: >8 cores @ 2.5 GHz (Python)

Method Description:
This paper proposes a robust and fast camera-LiDAR fusion-based MOT method that achieves a good trade-off between accuracy and speed. Exploiting the complementary characteristics of the camera and LiDAR sensors, an effective deep association mechanism is designed and embedded in the proposed MOT method. This mechanism tracks an object in the 2D domain while it is far away and detected only by the camera, and updates the 2D trajectory with 3D information once the object enters the LiDAR field of view, achieving a smooth fusion of 2D and 3D trajectories. Extensive experiments on the KITTI dataset indicate that the proposed method offers clear advantages over state-of-the-art MOT methods in terms of both tracking accuracy and processing speed. Code available: https://github.com/wangxiyang2022/DeepFusionMOT
Parameters:
See the code for details.
Latex Bibtex:
@ARTICLE{9810346,
  author={Wang, Xiyang and Fu, Chunyun and Li, Zhankun and Lai, Ying and He, Jiawei},
  journal={IEEE Robotics and Automation Letters},
  title={DeepFusionMOT: A 3D Multi-Object Tracking Framework Based on Camera-LiDAR Fusion with Deep Association},
  year={2022},
  volume={},
  number={},
  pages={1-8},
  doi={10.1109/LRA.2022.3187264}}
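
The cascaded 2D/3D association described above can be sketched roughly as follows. This is only an illustrative Python sketch under simplifying assumptions (greedy IoU matching in the image plane; the names Track, associate and iou_2d are hypothetical), not the repository's actual implementation, which should be consulted for details.

import numpy as np

def iou_2d(a, b):
    """IoU of two axis-aligned 2D boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

class Track:
    def __init__(self, box2d):
        self.box2d = np.asarray(box2d, dtype=float)  # 2D state, always available
        self.box3d = None  # filled in once the object is seen by the LiDAR

def associate(tracks, det2d, det3d_with_2d, iou_thr=0.3):
    """Two-stage greedy cascade: fused (LiDAR+camera) detections first,
    then the remaining camera-only 2D detections."""
    unmatched = list(range(len(tracks)))
    # Stage 1: fused detections (3D box + its 2D projection) upgrade tracks to 3D.
    for box3d, box2d in det3d_with_2d:
        best, best_iou = None, iou_thr
        for ti in unmatched:
            s = iou_2d(tracks[ti].box2d, box2d)
            if s > best_iou:
                best, best_iou = ti, s
        if best is not None:
            tracks[best].box2d = np.asarray(box2d, dtype=float)
            tracks[best].box3d = np.asarray(box3d, dtype=float)
            unmatched.remove(best)
    # Stage 2: camera-only detections keep distant objects alive in 2D.
    for box2d in det2d:
        best, best_iou = None, iou_thr
        for ti in unmatched:
            s = iou_2d(tracks[ti].box2d, box2d)
            if s > best_iou:
                best, best_iou = ti, s
        if best is not None:
            tracks[best].box2d = np.asarray(box2d, dtype=float)
            unmatched.remove(best)
    return unmatched  # indices of tracks left unmatched in this frame

if __name__ == "__main__":
    tracks = [Track([0, 0, 10, 20]), Track([100, 0, 110, 20])]
    # One fused (LiDAR + camera) detection and one camera-only detection.
    leftover = associate(
        tracks,
        det2d=[[101, 1, 111, 21]],
        det3d_with_2d=[([1.0, 2.0, 30.0, 1.5, 1.6, 4.0, 0.1], [1, 1, 11, 21])])
    print(tracks[0].box3d is not None, tracks[1].box3d is None, leftover)

The two stages reflect the idea that a distant object can be maintained purely as a 2D trajectory until a matching fused detection promotes it to a 3D trajectory.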

Detailed Results

From all 29 test sequences, our benchmark computes the commonly used tracking metrics CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below show all of these metrics.
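
For reference, the two CLEAR MOT scores are defined in [1] as (a standard statement of the definitions, not specific to this submission):

\[
\mathrm{MOTA} = 1 - \frac{\sum_t \left(\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDS}_t\right)}{\sum_t \mathrm{GT}_t},
\qquad
\mathrm{MOTP} = \frac{\sum_{t,i} d_{t,i}}{\sum_t c_t},
\]

where GT_t is the number of ground-truth objects in frame t, c_t the number of matched track-object pairs in frame t, and d_{t,i} the matching score of pair i (a bounding-box overlap in this benchmark).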


Benchmark MOTA MOTP MODA MODP
CAR 84.80 % 85.10 % 84.90 % 88.17 %

Benchmark recall precision F1 TP FP FN FAR #objects #trajectories
CAR 87.94 % 98.25 % 92.81 % 33532 597 4597 5.37 % 37694 1493

Benchmark MT PT ML IDS FRAG
CAR 68.46 % 22.46 % 9.08 % 35 444



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.

