Method

Local-Global Motion Tracker [LGM]


Submitted on 19 Sep. 2021 12:33 by
Wang Gaoang (Zhejiang University)

Running time: 0.08 s
Environment: GPU @ 2.5 GHz (Python)

Method Description:
The method tackles the association problem in
long-term tracking using motion information
exclusively, without relying on appearance
features. We address the tracklet embedding
problem with the proposed reconstruct-to-embed
strategy based on deep graph convolutional
networks (GCNs).
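
For illustration, the snippet below is a minimal, hypothetical sketch of the general idea: embed a tracklet's boxes with a graph convolution over a temporal chain graph and compare tracklets by a motion-only similarity. The function names, feature layout, dimensions, and random weights are assumptions made for this sketch; the actual reconstruct-to-embed network is described in the manuscript.

    # Hypothetical sketch: embed a tracklet's boxes with one graph-convolution
    # layer over a temporal chain graph, then compare tracklets by cosine
    # similarity. Shapes, names, and the random weights are illustrative only;
    # see the manuscript for the actual reconstruct-to-embed GCN.
    import numpy as np

    def gcn_layer(feats, adj, weight):
        # Symmetrically normalised adjacency with self-loops: D^-1/2 (A+I) D^-1/2
        a_hat = adj + np.eye(adj.shape[0])
        d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
        return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight, 0.0)

    def tracklet_embedding(boxes, weight):
        # boxes: (T, 5) array of [frame, x, y, w, h]; the chain graph links
        # consecutive detections, so only motion/geometry is used.
        t = len(boxes)
        adj = np.zeros((t, t))
        idx = np.arange(t - 1)
        adj[idx, idx + 1] = 1.0
        adj[idx + 1, idx] = 1.0
        hidden = gcn_layer(boxes, adj, weight)
        return hidden.mean(axis=0)  # pool node features into one tracklet vector

    rng = np.random.default_rng(0)
    w = rng.standard_normal((5, 16))  # illustrative layer weights
    tr_a = rng.random((8, 5))         # two toy tracklets
    tr_b = rng.random((6, 5))
    ea, eb = tracklet_embedding(tr_a, w), tracklet_embedding(tr_b, w)
    sim = ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb) + 1e-8)
    print(f"motion-only tracklet similarity: {sim:.3f}")
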
Parameters:
See manuscript
Latex Bibtex:
@inproceedings{wang2021track,
  title={Track without Appearance: Learn Box and Tracklet Embedding with Local and Global Motion Patterns for Vehicle Tracking},
  author={Wang, Gaoang and Gu, Renshu and Liu, Zuozhu and Hu, Weijie and Song, Mingli and Hwang, Jenq-Neng},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  pages={9876--9886},
  year={2021}
}

Detailed Results

From all 29 test sequences, our benchmark computes the commonly used tracking metrics: CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below show all of these metrics.


Benchmark MOTA MOTP MODA MODP
CAR 88.06 % 84.16 % 89.43 % 87.07 %

Benchmark recall precision F1 TP FP FN FAR #objects #trajectories
CAR 94.52 % 96.18 % 95.34 % 37186 1478 2158 13.29 % 48733 1059

Benchmark MT PT ML IDS FRAG
CAR 85.54 % 12.31 % 2.15 % 469 590
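
As a reference for how the columns above relate, the sketch below applies the standard CLEAR MOT accuracy formulas from [1] to toy counts. The benchmark applies its own matching thresholds and don't-care handling, so plugging the table values directly into these formulas will not reproduce the reported percentages exactly.

    # Standard CLEAR MOT accuracy formulas from [1], shown only to illustrate
    # how FP, FN, IDS and the ground-truth count relate to MODA/MOTA.
    def clear_mot_accuracy(fp, fn, ids, num_gt):
        moda = 1.0 - (fp + fn) / num_gt        # detection accuracy (no IDS term)
        mota = 1.0 - (fp + fn + ids) / num_gt  # tracking accuracy (with IDS term)
        return moda, mota

    # toy example: 100 ground-truth boxes, 5 false positives, 8 misses, 2 switches
    print(clear_mot_accuracy(fp=5, fn=8, ids=2, num_gt=100))  # (0.87, 0.85)
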



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.

