Method

JRMOT (uses laser point clouds; online method)
https://github.com/StanfordVL/JRMOT_ROS

Submitted on 1 Mar. 2020 21:45 by
Mihir Patel (Stanford)

Running time: 0.07 s
Environment: 4 cores @ 2.5 GHz (Python)

Method Description:
JRMOT is a novel 3D MOT system that integrates information from 2D RGB images
and 3D point clouds into a real-time framework. Our system leverages advances
in neural-network-based re-identification as well as 2D and 3D detection and
descriptors. We combine these cues in a joint probabilistic data-association
framework within a multi-modal recursive Kalman architecture to achieve
online, real-time 3D MOT (an illustrative sketch of this multi-modal filtering
step is given after the citation below).
Parameters:
See paper.
Latex Bibtex:
@inproceedings{Shenoi2020JRMOTAR,
title = {JRMOT: A Real-Time 3D Multi-Object
Tracker and a New Large-Scale Dataset},
author = {Abhijeet Shenoi and Mihir Patel and
JunYoung Gwak and Patrick Goebel and Amir
Sadeghian and Hamid Rezatofighi and Roberto
Mart{\'i}n-Mart{\'i}n and Silvio Savarese},
year = {2020},
booktitle = {The {IEEE/RSJ} International
Conference on Intelligent Robots and Systems
({IROS})},
url={https://arxiv.org/abs/2002.08397}
}
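
Illustrative sketch (not the authors' implementation): the snippet below shows, under simplified assumptions, the kind of multi-modal recursive Kalman update the description refers to, fusing a 3D position measurement and a 2D image-plane detection into one constant-velocity track. JRMOT's actual state parameterisation, appearance features, JPDA association, and gating are described in the paper and the linked repository; all noise values and camera intrinsics below are placeholders.

import numpy as np

class FusedKalmanTrack:
    # State: [x, y, z, vx, vy, vz] in camera coordinates (z pointing forward).
    def __init__(self, init_pos3d):
        self.x = np.hstack([np.asarray(init_pos3d, dtype=float), np.zeros(3)])
        self.P = np.eye(6)

    def predict(self, dt=0.1):
        F = np.eye(6)
        F[:3, 3:] = dt * np.eye(3)               # constant-velocity motion model
        Q = 0.01 * np.eye(6)                     # placeholder process noise
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def _update(self, residual, H, R):
        # Standard (extended) Kalman measurement update.
        S = H @ self.P @ H.T + R                 # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ residual
        self.P = (np.eye(6) - K @ H) @ self.P

    def update_3d(self, pos3d, sigma=0.1):
        # Fuse a 3D position measurement, e.g. from a point-cloud detector.
        H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self._update(np.asarray(pos3d, dtype=float) - H @ self.x,
                     H, sigma ** 2 * np.eye(3))

    def update_2d(self, uv, fx=700.0, fy=700.0, cx=600.0, cy=180.0, sigma=2.0):
        # Fuse a 2D image detection via a linearised pinhole projection (EKF-style).
        x, y, z = self.x[:3]
        pred_uv = np.array([fx * x / z + cx, fy * y / z + cy])
        H = np.zeros((2, 6))
        H[0, 0], H[0, 2] = fx / z, -fx * x / z ** 2
        H[1, 1], H[1, 2] = fy / z, -fy * y / z ** 2
        self._update(np.asarray(uv, dtype=float) - pred_uv,
                     H, sigma ** 2 * np.eye(2))

# Example: predict one step, then fuse a LiDAR and an image detection of the same object.
track = FusedKalmanTrack([2.0, 1.0, 10.0])
track.predict(dt=0.1)
track.update_3d([2.1, 1.0, 10.2])
track.update_2d([750.0, 250.0])
print(track.x[:3])                               # refined 3D position estimate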

Detailed Results

From all 29 test sequences, our benchmark computes the commonly used tracking metrics CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below show all of these metrics.
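
For reference, the two headline CLEAR MOT scores are defined in [1] as accumulated over all frames t; MT/PT/ML follow [2], where a ground-truth trajectory counts as Mostly Tracked if it is covered for at least 80% of its length and Mostly Lost if covered for less than 20%:

\mathrm{MOTA} = 1 - \frac{\sum_t \left(\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDS}_t\right)}{\sum_t \mathrm{GT}_t},
\qquad
\mathrm{MOTP} = \frac{\sum_{i,t} d_{i,t}}{\sum_t c_t}

Here FN_t, FP_t, IDS_t, and GT_t are the per-frame false negatives, false positives, identity switches, and ground-truth objects; d_{i,t} is the alignment error between matched ground-truth object i and its hypothesis in frame t; and c_t is the number of matches in frame t.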


Benchmark MOTA MOTP MODA MODP
CAR 85.70 % 85.48 % 85.98 % 88.42 %
PEDESTRIAN 46.33 % 72.54 % 47.82 % 91.78 %

Benchmark recall precision F1 TP FP FN FAR #objects #trajectories
CAR 89.51 % 97.81 % 93.48 % 34556 772 4049 6.94 % 39939 1308
PEDESTRIAN 56.25 % 87.27 % 68.40 % 13076 1908 10172 17.15 % 16669 858

Benchmark MT PT ML IDS FRAG
CAR 71.85 % 24.15 % 4.00 % 98 372
PEDESTRIAN 23.37 % 47.77 % 28.87 % 345 1111



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.

