Method

aUToTrack [la] [gp] [on]


Submitted on 12 Feb. 2019 03:34 by
Keenan Burnett (University of Toronto)

Running time: 0.01 s
Environment: 1 core @ >3.5 GHz (C/C++)

Method Description:
The output of a vision-based CNN detector is used to cluster LIDAR points, yielding measurements of the 3D positions of objects. Using GPS/IMU data, these objects are then tracked in 3D with an EKF.
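The two stages described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: the box format, camera intrinsics, filter structure, and noise values are all assumptions, and the filter is shown as a plain linear Kalman filter since the position-only measurement model used here is linear.

```python
import numpy as np

def box_measurement(points_cam, box, K):
    """Cluster LIDAR points with a 2D detection: project points into the
    image, keep those inside the box, and take their median as a robust
    3D position measurement.
    points_cam: (N, 3) LIDAR points in the camera frame.
    box: (x1, y1, x2, y2) detection in pixels. K: 3x3 camera intrinsics."""
    pts = points_cam[points_cam[:, 2] > 0.0]       # keep points in front of the camera
    uvw = pts @ K.T                                # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]
    x1, y1, x2, y2 = box
    inside = ((uv[:, 0] >= x1) & (uv[:, 0] <= x2) &
              (uv[:, 1] >= y1) & (uv[:, 1] <= y2))
    if not inside.any():
        return None
    return np.median(pts[inside], axis=0)          # median rejects outlier points

def ekf_step(x, P, z, dt, q=1.0, r=0.1):
    """One predict/update cycle of a constant-velocity filter for a single
    track, state = [x, y, z, vx, vy, vz]; q and r are placeholder noise values."""
    F = np.eye(6); F[:3, 3:] = dt * np.eye(3)      # constant-velocity motion model
    Q = q * np.eye(6); R = r * np.eye(3)
    H = np.hstack([np.eye(3), np.zeros((3, 3))])   # only position is measured
    x = F @ x; P = F @ P @ F.T + Q                 # predict
    y = z - H @ x                                  # innovation
    S = H @ P @ H.T + R
    Kg = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    return x + Kg @ y, (np.eye(6) - Kg @ H) @ P    # updated state and covariance
```

A full tracker would additionally associate each frame's measurements with existing tracks (e.g. by gated nearest-neighbour matching) and manage track birth and death, which this sketch omits.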
Parameters:
N/A
Latex Bibtex:
@article{Burnett2019,
  author        = {Keenan Burnett and Sepehr Samavi and Steven L. Waslander and
                   Timothy D. Barfoot and Angela P. Schoellig},
  title         = {aUToTrack: {A} Lightweight Object Detection and Tracking System
                   for the {SAE} AutoDrive Challenge},
  journal       = {arXiv:1905.08758},
  year          = {2019},
  url           = {http://arxiv.org/abs/1905.08758},
  archivePrefix = {arXiv},
  eprint        = {1905.08758},
}

Detailed Results

From all 29 test sequences, our benchmark computes the commonly used tracking metrics: CLEAR MOT, MT/PT/ML, identity switches (IDS), and fragmentations (FRAG) [1,2]. The tables below report all of these metrics.
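The headline MOTA/MOTP scores follow the standard CLEAR MOT definitions from [1]: MOTA aggregates the three error counts relative to the number of ground-truth objects, and MOTP averages the match quality of true positives. A minimal sketch of those two formulas, with made-up counts (this is not the official evaluation code, and the variable names are illustrative):

```python
def clear_mot(fp, fn, ids, num_gt, total_overlap, num_matches):
    """CLEAR MOT metrics per Bernardin & Stiefelhagen [1].
    fp/fn/ids: false positives, misses, identity switches summed over frames.
    num_gt: total ground-truth objects over all frames.
    total_overlap / num_matches: mean match quality of true positives."""
    mota = 1.0 - (fp + fn + ids) / num_gt   # accuracy: all errors vs. GT count
    motp = total_overlap / num_matches      # precision: mean matched overlap
    return mota, motp

# Example with made-up counts (not the numbers in the tables below):
mota, motp = clear_mot(fp=10, fn=30, ids=5, num_gt=500,
                       total_overlap=380.0, num_matches=460)
```

Note that MOTA can be negative when the combined error count exceeds the number of ground-truth objects, so it is not bounded below by 0 %.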


Benchmark  MOTA     MOTP     MODA     MODP
CAR        82.25 %  80.52 %  85.23 %  84.22 %

Benchmark  recall   precision  F1       TP     FP    FN    FAR     #objects  #trajectories
CAR        89.36 %  97.03 %    93.04 %  33921  1040  4038  9.35 %  39367     3568

Benchmark  MT       PT       ML      IDS   FRAG
CAR        72.62 %  23.85 %  3.54 %  1025  1402



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.
