Method

Point3DT


Submitted on 15 Feb. 2020 06:36 by
Wang Sukai (The Hong Kong University of Science and Technology)

Running time: 0.05 s
Environment: 1 core @ >3.5 GHz (Python)

Method Description:
We propose PointTrackNet, an end-to-end 3-D
object detection and tracking network that
generates foreground masks, 3-D bounding boxes,
and point-wise tracking association displacements
for each detected object. The network takes only
two adjacent point-cloud frames as input.
Experimental results show competitive performance
in irregularly and rapidly changing scenarios.
Parameters:
Detailed in the paper.
Latex Bibtex:
@inproceedings{PointTrackNet,
  title     = {PointTrackNet: An End-to-End Network for 3-D Object Detection and Tracking from Point Clouds},
  author    = {Sukai Wang and Yuxiang Sun and Chengju Liu and Ming Liu},
  booktitle = {to be submitted ICRA'20}
}

Detailed Results

From all 29 test sequences, our benchmark computes the commonly used tracking metrics: CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below show all of these metrics.
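As a rough sketch, the CLEAR MOT accuracy scores can be aggregated from counts like those reported below. The function name and the simplified aggregate counting are assumptions for illustration only; the official evaluation additionally performs per-frame matching between hypotheses and ground truth (and computes MOTP from the matched-pair overlap), so its numbers need not equal this simple aggregation.

```python
def clear_mot_summary(tp, fp, fn, ids):
    """Aggregate CLEAR MOT accuracy scores from total counts [1].

    tp, fp, fn: true/false positives and false negatives summed
    over all frames; ids: number of identity switches.
    Ground-truth object count is tp + fn.
    (MOTP is not computable from counts alone: it averages the
    localization error/overlap of the matched pairs.)
    """
    gt = tp + fn
    mota = 1.0 - (fn + fp + ids) / gt   # tracking accuracy (penalizes IDS)
    moda = 1.0 - (fn + fp) / gt         # detection accuracy (no IDS term)
    recall = tp / gt
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {"MOTA": mota, "MODA": moda, "recall": recall,
            "precision": precision, "F1": f1}


# Toy counts: 8 hits, 1 false alarm, 2 misses, no identity switches.
scores = clear_mot_summary(tp=8, fp=1, fn=2, ids=0)
```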


Benchmark   MOTA      MOTP      MODA      MODP
CAR         68.24 %   76.57 %   68.56 %   81.83 %

Benchmark   recall    precision   F1        TP      FP     FN     FAR      #objects   #trajectories
CAR         83.56 %   88.10 %     85.77 %   32587   4400   6413   39.55 %  42719      1245

Benchmark   MT        PT        ML        IDS    FRAG
CAR         60.62 %   27.08 %   12.31 %   111    725



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.

