Method

Multi-Modal Object Tracking with Pareto Neural Architecture Search [PNAS-MOT]
https://github.com/PholyPeng/PNAS-MOT

Submitted on 23 Mar. 2024 04:56 by
Peter Peng (Shanghai Jiao Tong University)

Running time: 0.01 s
Environment: GPU @ 2.5 GHz (Python)

Method Description:
Pareto Neural Architecture Search for multi-modal multiple object tracking.
Parameters:
\alpha = 0.2
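
The description above is terse; as a purely illustrative sketch (not the paper's actual formulation, see the repository linked above for the real implementation), a Pareto-style search can scalarize tracking accuracy against latency. The function names, the latency budget, and the role given to the weight alpha below are assumptions.

# Hypothetical weighted-sum (Pareto scalarization) search objective trading
# tracking accuracy against measured latency. Names and the role of `alpha`
# are illustrative assumptions, not taken from PNAS-MOT.
def search_objective(tracking_loss, latency_ms, latency_budget_ms=10.0, alpha=0.2):
    # Penalize only architectures whose measured latency exceeds the budget;
    # sweeping alpha traces out different points on the accuracy/latency front.
    latency_penalty = max(0.0, latency_ms - latency_budget_ms)
    return tracking_loss + alpha * latency_penalty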
Latex Bibtex:
@ARTICLE{peng2024pnasmot,
  author={Peng, Chensheng and Zeng, Zhaoyu and Gao, Jinling and Zhou, Jundong and Tomizuka, Masayoshi and Wang, Xinbing and Zhou, Chenghu and Ye, Nanyang},
  journal={IEEE Robotics and Automation Letters},
  title={PNAS-MOT: Multi-Modal Object Tracking With Pareto Neural Architecture Search},
  year={2024},
  volume={},
  number={},
  pages={1-8},
  doi={10.1109/LRA.2024.3379865}}

Detailed Results

From all 29 test sequences, our benchmark computes the HOTA tracking metrics (HOTA, DetA, AssA, DetRe, DetPr, AssRe, AssPr, LocA) [1] as well as the CLEARMOT, MT/PT/ML, identity switches, and fragmentation [2,3] metrics. The tables below show all of these metrics.
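
For reference, the quantities in the tables below are related by the standard definitions from [1,2]. The sketch below is a back-of-envelope helper, not the benchmark's evaluation code; the official script additionally sweeps localization thresholds and filters the ground truth, so values recomputed from the rounded table entries are only approximate.

def mota(fp, fn, idsw, num_gt):
    # CLEAR MOT accuracy [2]: MOTA = 1 - (FN + FP + IDSW) / (# ground-truth objects)
    return 1.0 - (fn + fp + idsw) / num_gt

def hota_at_threshold(det_a, ass_a):
    # At a single localization threshold, HOTA = sqrt(DetA * AssA) [1];
    # the reported HOTA also averages over thresholds, so applying this to
    # the threshold-averaged DetA/AssA in the table is only an approximation.
    return (det_a * ass_a) ** 0.5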


Benchmark HOTA DetA AssA DetRe DetPr AssRe AssPr LocA
CAR 67.32 % 77.69 % 58.99 % 81.58 % 85.81 % 64.70 % 80.74 % 86.94 %

Benchmark TP FP FN
CAR 32131 2261 568

Benchmark MOTA MOTP MODA IDSW sMOTA
CAR 89.59 % 85.44 % 91.77 % 751 75.99 %

Benchmark MT rate PT rate ML rate FRAG
CAR 86.46 % 11.08 % 2.46 % 276

Benchmark # Dets # Tracks
CAR 32699 1134


[1] J. Luiten, A. Ošep, P. Dendorfer, P. Torr, A. Geiger, L. Leal-Taixé, B. Leibe: HOTA: A Higher Order Metric for Evaluating Multi-object Tracking. IJCV 2020.
[2] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[3] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.

