Method

Multi-View 3D Object Detection Network (LIDAR) [MV3D (LIDAR)]


Submitted on 24 Jul. 2017 13:01 by
David Stutz (Max Planck Institute for Intelligent Systems)

Running time: 0.24 s
Environment: GPU @ 2.5 GHz (Python + C/C++)

Method Description:
https://arxiv.org/abs/1611.07759

Bird's eye view (BV) + front view (FV) network; only LIDAR data is used.

Originally submitted by Xiaozhi Chen
(https://xiaozhichen.github.io/, Tsinghua University).
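
The bird's eye view input described in the paper is obtained by discretizing the LIDAR point cloud into a 2D grid with height, intensity, and density channels (the paper uses several height slices; a single slice is used here for brevity). Below is a minimal sketch under assumed range and resolution values, not the submission's actual parameters:

import numpy as np

def lidar_to_bev(points, x_range=(0.0, 70.4), y_range=(-40.0, 40.0),
                 z_range=(-3.0, 1.0), resolution=0.1):
    # points: (N, 4) array of x, y, z, reflectance from a Velodyne scan.
    x, y, z, refl = points[:, 0], points[:, 1], points[:, 2], points[:, 3]

    # Keep only points inside the region of interest.
    keep = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]) &
            (z >= z_range[0]) & (z < z_range[1]))
    x, y, z, refl = x[keep], y[keep], z[keep], refl[keep]

    # Discretize ground-plane coordinates into grid cells.
    rows = ((x - x_range[0]) / resolution).astype(np.int64)
    cols = ((y - y_range[0]) / resolution).astype(np.int64)
    n_rows = int(round((x_range[1] - x_range[0]) / resolution))
    n_cols = int(round((y_range[1] - y_range[0]) / resolution))

    height_map = np.zeros((n_rows, n_cols), dtype=np.float32)
    intensity_map = np.zeros((n_rows, n_cols), dtype=np.float32)
    density_map = np.zeros((n_rows, n_cols), dtype=np.float32)

    # Per-cell maximum height above z_range[0], maximum reflectance
    # (the paper uses the reflectance of the highest point; taking the
    # max is a simplification), and log-normalized point count.
    np.maximum.at(height_map, (rows, cols), z - z_range[0])
    np.maximum.at(intensity_map, (rows, cols), refl)
    np.add.at(density_map, (rows, cols), 1.0)
    density_map = np.minimum(1.0, np.log(density_map + 1.0) / np.log(64.0))

    return np.stack([height_map, intensity_map, density_map], axis=0)

# Example: encode a KITTI Velodyne scan (file path is illustrative).
scan = np.fromfile("000000.bin", dtype=np.float32).reshape(-1, 4)
bev = lidar_to_bev(scan)  # (3, 704, 800) feature map
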
Parameters:
TBD
LaTeX BibTeX:

@inproceedings{Chen2017CVPR,
  title = {Multi-View 3D Object Detection Network for Autonomous Driving},
  author = {Xiaozhi Chen and Huimin Ma and Ji Wan and Bo Li and Tian Xia},
  booktitle = {CVPR},
  year = {2017}
}

Detailed Results

Object detection and orientation estimation results. Results for object detection are given in terms of average precision (AP) and results for joint object detection and orientation estimation are provided in terms of average orientation similarity (AOS).


Benchmark               Easy      Moderate  Hard
Car (3D Detection)      68.35 %   54.54 %   49.16 %
Car (Bird's Eye View)   86.49 %   78.98 %   72.23 %
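
For reference, the AP values above are interpolated average precisions: precision is interpolated and averaged over a fixed set of equally spaced recall points (originally 11 on KITTI). A minimal sketch follows; the toy precision/recall values are illustrative only:

import numpy as np

def interpolated_ap(recalls, precisions, num_points=11):
    # Average of interpolated precision over equally spaced recall points.
    ap = 0.0
    for r in np.linspace(0.0, 1.0, num_points):
        above = precisions[recalls >= r]
        ap += (above.max() if above.size else 0.0) / num_points
    return ap

# Toy curve of a detector whose precision drops as recall grows.
recalls = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
precisions = np.array([1.0, 0.9, 0.8, 0.6, 0.4])
print(interpolated_ap(recalls, precisions))  # ~0.67
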


3D object detection results.



Bird's eye view results.



