Method

End-to-End Multiple-Object Tracking Method with YOLO and Decoder [MO-YOLO]
https://github.com/liaopan-lp/MO-YOLO

Submitted on 21 Mar. 2024 06:27 by
Pan Liao (Northwestern Polytechnical University)

Running time: 0.024 s
Environment: NVIDIA RTX 2080 Ti (Python)

Method Description:
Drawing insights from successful models such as
GPT, our proposed MO-YOLO is an efficient and
computationally frugal end-to-end MOT solution.
It integrates principles from YOLO and RT-DETR,
adopting a decoder-centric architecture alongside
other complementary structures.
Parameters:
45M
LaTeX BibTeX:
@article{pan2023mo,
  title={MO-YOLO: End-to-End Multiple-Object Tracking Method with YOLO and MOTR},
  author={Pan, Liao and Feng, Yang and Di, Wu and Bo, Liu and Xingle, Zhang},
  journal={arXiv preprint arXiv:2310.17170},
  year={2023}
}

Detailed Results

From all 29 test sequences, our benchmark computes the commonly used tracking metrics: CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below show all of these metrics.
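The headline CLEAR MOT accuracy score can be sketched as follows, assuming the standard MOTA definition from [1]; the benchmark's exact matching thresholds and per-frame bookkeeping may differ from this simplified aggregate form, so this will not exactly reproduce the table values.

```python
def mota(false_negatives: int, false_positives: int,
         id_switches: int, num_gt: int) -> float:
    """MOTA = 1 - (FN + FP + IDS) / GT, with GT the number of
    ground-truth objects; a sketch of the standard definition [1],
    not the benchmark's evaluation script."""
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt

# Toy example: 100 ground-truth objects, 10 misses, 5 false alarms, 1 ID switch.
print(round(mota(10, 5, 1, 100), 2))  # 0.84
```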


Benchmark MOTA MOTP MODA MODP
CAR 83.55 % 84.61 % 84.28 % 86.99 %
PEDESTRIAN 55.71 % 73.93 % 56.23 % 92.59 %

Benchmark recall precision F1 TP FP FN FAR #objects #trajectories
CAR 89.21 % 96.69 % 92.80 % 34824 1192 4213 10.72 % 44621 997
PEDESTRIAN 65.50 % 88.07 % 75.13 % 15302 2073 8059 18.64 % 21369 361
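The detection-level columns follow the usual definitions, so a row can be re-derived from its raw TP/FP/FN counts; the snippet below is a consistency check against the CAR row, not the benchmark's own evaluation code.

```python
# CAR row of the table above.
tp, fp, fn = 34824, 1192, 4213

recall = tp / (tp + fn)       # fraction of ground-truth objects detected
precision = tp / (tp + fp)    # fraction of detections that are correct
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"recall={recall:.2%} precision={precision:.2%} F1={f1:.2%}")
# recall=89.21% precision=96.69% F1=92.80%
```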

Benchmark MT PT ML IDS FRAG
CAR 72.00 % 22.77 % 5.23 % 252 569
PEDESTRIAN 34.02 % 30.58 % 35.40 % 121 797
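MT/PT/ML follow the usual convention of [2]: a ground-truth trajectory is Mostly Tracked if it is covered for at least 80% of its span, Mostly Lost if covered for under 20%, and Partially Tracked otherwise. A minimal sketch of that classification, assuming these standard thresholds:

```python
def coverage_class(tracked_frames: int, total_frames: int) -> str:
    """Classify a ground-truth trajectory by its tracked-frame ratio,
    using the common 80% / 20% thresholds [2]; a sketch, not the
    benchmark's evaluation code."""
    ratio = tracked_frames / total_frames
    if ratio >= 0.8:
        return "MT"  # mostly tracked
    if ratio < 0.2:
        return "ML"  # mostly lost
    return "PT"      # partially tracked

print(coverage_class(90, 100), coverage_class(50, 100), coverage_class(10, 100))
# MT PT ML
```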



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.
