Method

Joint Multi-Object Detection and Tracking with Camera-LiDAR Fusion for Autonomous Driving [JMODT]
https://github.com/Kemo-Huang/JMODT

Submitted on 3 Mar. 2021 08:56 by
Kemiao Huang (Southern University of Science and Technology)

Running time: 0.01 s
Environment: GPU @ 2.5 GHz (Python)

Method Description:
Our MOT system performs online joint object detection and tracking, with robust affinity computation and comprehensive data association.
Parameters:
See the code for details.
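As one illustration of the data-association step, the sketch below greedily matches existing tracks to new detections by pairwise affinity. Note this is a simplified stand-in: it uses plain 2D box IoU as the affinity and a greedy matcher, whereas JMODT itself computes learned camera-LiDAR affinities (see the repository for the actual implementation). Box format `(x1, y1, x2, y2)` and the 0.3 threshold are hypothetical choices for the example.

```python
def iou_2d(a, b):
    """Axis-aligned IoU between two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, thresh=0.3):
    """Greedy association: repeatedly commit the highest-affinity
    (track, detection) pair whose affinity exceeds the threshold."""
    pairs = sorted(
        ((iou_2d(t, d), i, j)
         for i, t in enumerate(tracks)
         for j, d in enumerate(detections)),
        reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, i, j in pairs:
        if score < thresh:
            break  # all remaining pairs are below threshold
        if i in used_t or j in used_d:
            continue  # track or detection already matched
        matches.append((i, j))
        used_t.add(i)
        used_d.add(j)
    unmatched_tracks = [i for i in range(len(tracks)) if i not in used_t]
    unmatched_dets = [j for j in range(len(detections)) if j not in used_d]
    return matches, unmatched_tracks, unmatched_dets
```

Unmatched tracks would then be handled by the tracker's life-cycle management (e.g. kept alive for a few frames), and unmatched detections would spawn new tracks.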
LaTeX BibTeX:
@inproceedings{huang2021joint,
title={Joint multi-object detection and tracking
with camera-LiDAR fusion for autonomous driving},
author={Huang, Kemiao and Hao, Qi},
booktitle={2021 IEEE/RSJ International Conference
on Intelligent Robots and Systems (IROS)},
pages={6983--6989},
year={2021},
organization={IEEE}
}

Detailed Results

From all 29 test sequences, our benchmark computes the commonly used tracking metrics: CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below show all of these metrics.


Benchmark MOTA MOTP MODA MODP
CAR 86.27 % 85.41 % 86.40 % 88.32 %

Benchmark recall precision F1 TP FP FN FAR #objects #trajectories
CAR 91.26 % 96.65 % 93.88 % 35857 1244 3433 11.18 % 42732 1803

Benchmark MT PT ML IDS FRAG
CAR 77.38 % 19.69 % 2.92 % 45 585
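The detection-level percentages above follow directly from the reported TP/FP/FN counts; a minimal sketch of that arithmetic, using only the car-class counts from the table:

```python
# Car-class counts from the results table above
TP, FP, FN = 35857, 1244, 3433

recall = TP / (TP + FN)                             # -> 91.26 %
precision = TP / (TP + FP)                          # -> 96.65 %
f1 = 2 * precision * recall / (precision + recall)  # -> 93.88 %
```

(MOTA/MODA are computed per frame by the KITTI evaluation script and additionally involve identity switches, so they cannot be reproduced from these aggregate counts alone.)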



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.
