Method

LiCar: Lidar Point Cloud Based Real-time Vehicle Detection [LiCar]


Submitted on 12 Feb. 2018 16:31 by
Qiankun Tang (Institute of Computing Technology, Chinese Academy of Sciences)

Running time: 0.09 s
Environment: GPU @ 2.5 GHz (Python)

Method Description:
The 3D point clouds are projected onto the ground plane and
gridded, with the attributes of the point cloud encoded into
the grid cells. This representation of the 3D point cloud is
then fed into a deep neural network. To improve detection
accuracy, we construct a position-and-orientation-sensitive
feature map that helps regress the vehicles' pose and size.
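
Below is a minimal Python/NumPy sketch of the ground-plane projection and gridding step described above. The grid ranges, cell size, and per-cell attributes (max height, max intensity, log point density) are illustrative assumptions; the submission does not state which attributes are encoded or at what resolution.

import numpy as np

def lidar_to_bev_grid(points, x_range=(0.0, 70.0), y_range=(-40.0, 40.0), cell=0.1):
    """Project lidar points (N x 4: x, y, z, intensity) onto a ground-plane grid.

    Each cell encodes max height, max intensity, and log point density.
    Ranges, resolution, and attributes are assumptions for illustration.
    """
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((ny, nx, 3), dtype=np.float32)

    # Keep only points inside the chosen ground-plane region.
    keep = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[keep]

    # Map each point to its grid cell.
    ix = ((pts[:, 0] - x_range[0]) / cell).astype(np.int64)
    iy = ((pts[:, 1] - y_range[0]) / cell).astype(np.int64)

    # Encode per-cell attributes of the point cloud.
    np.maximum.at(grid[:, :, 0], (iy, ix), pts[:, 2])   # max height (z)
    np.maximum.at(grid[:, :, 1], (iy, ix), pts[:, 3])   # max intensity
    np.add.at(grid[:, :, 2], (iy, ix), 1.0)             # point count per cell
    grid[:, :, 2] = np.log1p(grid[:, :, 2])             # compress density

    return grid  # (H, W, C) map that can be fed to the detection network
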
Parameters:
weight decay: 0.0005
momentum: 0.9
base_lr: 0.001
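
The training framework is not stated in the submission; the snippet below is only an illustrative PyTorch SGD configuration using the listed values, with a placeholder module standing in for the (unreleased) LiCar network.

import torch

# Placeholder module standing in for the LiCar detection network (not released).
model = torch.nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

# SGD configured with the hyperparameters listed above; the schedule, batch size,
# and number of iterations are not given in the submission.
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.001,              # base_lr
    momentum=0.9,          # momentum
    weight_decay=0.0005,   # weight decay
)
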
Latex Bibtex:

Detailed Results

Object detection and orientation estimation results. Results for object detection are given in terms of average precision (AP) and results for joint object detection and orientation estimation are provided in terms of average orientation similarity (AOS).


Benchmark Easy Moderate Hard
Car (Detection) 50.23 % 40.05 % 41.80 %
Car (Orientation) 24.40 % 19.16 % 20.80 %
Car (3D Detection) 23.90 % 21.92 % 20.31 %
Car (Bird's Eye View) 55.80 % 42.93 % 44.79 %


[Figure] 2D object detection results.
[Figure] Orientation estimation results.
[Figure] 3D object detection results.
[Figure] Bird's eye view results.



