Method

Discrete-Continuous Optimization with Exclusion Constraints [DCO-X*]
http://research.milanton.net/dctracking/

Submitted on 5 Dec. 2014 06:38 by
Anton Milan (University of Adelaide)

Running time: 0.9 s
Environment: 1 core @ >3.5 GHz (Matlab + C/C++)

Method Description:
When tracking multiple targets in crowded scenarios, modeling mutual exclusion between distinct targets becomes important at two levels: (1) in data association, each target observation should support at most one trajectory, and each trajectory should be assigned at most one observation per frame; (2) in trajectory estimation, two trajectories should remain spatially separated at all times to avoid collisions. Yet, existing trackers often sidestep these important constraints. We address this using a mixed discrete-continuous conditional random field (CRF) that explicitly models both types of constraints: exclusion between conflicting observations with supermodular pairwise terms, and exclusion between trajectories by generalizing global label costs to suppress the co-occurrence of incompatible labels (trajectories). We develop an expansion-move-based MAP estimation scheme that handles both non-submodular constraints and pairwise global label costs. Furthermore, we perform a statistical analysis of ground-truth trajectories to derive appropriate CRF potentials for modeling data fidelity, target dynamics, and inter-target occlusion.
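
To make the structure of this objective concrete, the following is a minimal sketch (not the authors' implementation) of how the discrete part of such an energy could be evaluated for a labeling that assigns each detection to a trajectory or to an outlier label. All type names, weights, and penalty forms here are illustrative assumptions.

// Minimal sketch of a discrete, exclusion-aware labeling energy; an
// illustration of the idea only, not the DCO-X code. All names and
// penalty forms are assumptions.
#include <cstddef>
#include <set>
#include <utility>
#include <vector>

static const int OUTLIER = -1;

struct Model {
    // unary[i][l]: data-fidelity cost of assigning detection i to trajectory l
    std::vector<std::vector<double>> unary;
    // detection pairs that must not share a trajectory label (e.g. same frame)
    std::vector<std::pair<int, int>> conflicting;
    // trajectory-label pairs that must not co-occur (they would collide)
    std::vector<std::pair<int, int>> incompatible;
    double outlierCost;      // cost of declaring a detection an outlier
    double exclusionWeight;  // supermodular pairwise penalty for detection-level exclusion
    double pairLabelCost;    // pairwise global label cost for trajectory-level exclusion
    double labelCost;        // per-trajectory cost that discourages spurious trajectories
};

double energy(const std::vector<int>& f, const Model& m) {
    double E = 0.0;
    std::set<int> used;  // trajectory labels assigned to at least one detection
    for (std::size_t i = 0; i < f.size(); ++i) {
        if (f[i] == OUTLIER) {
            E += m.outlierCost;
        } else {
            E += m.unary[i][f[i]];
            used.insert(f[i]);
        }
    }
    for (const auto& p : m.conflicting)   // detection-level exclusion
        if (f[p.first] != OUTLIER && f[p.first] == f[p.second])
            E += m.exclusionWeight;
    for (const auto& p : m.incompatible)  // trajectory-level exclusion (label co-occurrence)
        if (used.count(p.first) && used.count(p.second))
            E += m.pairLabelCost;
    E += m.labelCost * static_cast<double>(used.size());
    return E;
}

MAP estimation would then alternate expansion moves over such a discrete labeling with continuous refinement of the trajectory hypotheses, as outlined in the description above.
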
Parameters:
Using Regionlets Detections

outlierCost=995.0662
labelCost=2390.744
unaryFactor=46.6913
persistenceFactor=1.1155
curvatureFactor=0
slopeFactor=0.002317
proxcostFactor=755.4725
exclusionFactor=244.6583
pairwiseFactor=1.5128
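
Purely as an illustration of how such a configuration might be consumed, the snippet below plugs the two directly named costs into the hypothetical Model struct sketched above; which energy terms the remaining factors scale is not spelled out here and is left as a comment.

// Hypothetical wiring of the listed parameters into the Model sketched above;
// the mapping is an assumption, not the actual DCO-X configuration.
Model m;
m.outlierCost = 995.0662;  // outlierCost: penalty for labelling a detection an outlier
m.labelCost   = 2390.744;  // labelCost: per-trajectory cost
// unaryFactor, persistenceFactor, curvatureFactor, slopeFactor, proxcostFactor,
// exclusionFactor and pairwiseFactor presumably scale the data-fidelity,
// dynamics, proximity and exclusion potentials of the full model, which the
// sketch above only approximates.
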
LaTeX BibTeX:
@inproceedings{Milan2013CVPR,
  Author    = {Anton Milan and Konrad Schindler and Stefan Roth},
  Booktitle = {CVPR},
  Title     = {Detection- and Trajectory-Level Exclusion in Multiple Object Tracking},
  Year      = {2013}
}

Detailed Results

Over all 29 test sequences, our benchmark computes the commonly used tracking metrics: CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below show all of these metrics.
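
For reference, MOTA and MOTP in the first table follow the CLEAR MOT definitions of [1]; the short sketch below shows the standard accumulation over frames. It is a generic illustration of those definitions, not the benchmark's evaluation script, and all names are assumptions.

// Generic CLEAR MOT accumulation as defined in [1]; not the benchmark's
// evaluation code, names are illustrative.
#include <vector>

struct FrameStats {
    int misses;          // ground-truth objects with no matched hypothesis (FN)
    int falsePositives;  // hypotheses with no matched ground truth (FP)
    int idSwitches;      // identity switches in this frame (IDS)
    int matches;         // matched ground-truth/hypothesis pairs
    int gtObjects;       // ground-truth objects present in this frame
    double matchScore;   // summed overlap (or distance) of the matched pairs
};

void clearMot(const std::vector<FrameStats>& frames, double& mota, double& motp) {
    long long fn = 0, fp = 0, ids = 0, gt = 0, matched = 0;
    double score = 0.0;
    for (const FrameStats& f : frames) {
        fn += f.misses; fp += f.falsePositives; ids += f.idSwitches;
        gt += f.gtObjects; matched += f.matches; score += f.matchScore;
    }
    // MOTA penalizes misses, false positives and identity switches relative to
    // the total number of ground-truth objects; MOTP is the mean match score.
    mota = 1.0 - static_cast<double>(fn + fp + ids) / static_cast<double>(gt);
    motp = score / static_cast<double>(matched);
}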


Benchmark MOTA MOTP MODA MODP
CAR 68.11 % 78.85 % 69.03 % 83.47 %

Benchmark recall precision F1 TP FP FN FAR #objects #trajectories
CAR 78.67 % 91.99 % 84.81 % 29740 2588 8063 23.27 % 36607 1530

Benchmark MT PT ML IDS FRAG
CAR 37.54 % 48.31 % 14.15 % 318 959



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.

