Method

RoIFusion: 3D Object Detection from LiDAR and Vision [RoIFusion]
TBA

Submitted on 10 Sep. 2020 17:40 by
Can Chen (Cranfield University)

Running time: 0.22 s
Environment: 1 core @ 3.0 GHz (Python)

Method Description:
We leverage the complementary advantages of LiDAR
and camera sensors by proposing a deep neural
network architecture for the fusion and efficient
detection of 3D objects, identifying their 3D
bounding boxes with orientation. To achieve this,
instead of densely combining the point-wise
features of the point cloud with the related pixel
features, we propose a novel fusion algorithm that
projects a set of 3D Regions of Interest (RoIs)
from the point cloud onto the 2D RoIs of the
corresponding images.
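The projection step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a KITTI-style 3x4 camera projection matrix `P` and a 3D box parameterized by bottom-center, size (h, w, l), and yaw around the vertical axis; the 2D RoI is taken as the axis-aligned bounding box of the projected corners.

```python
import numpy as np

def box3d_corners(center, size, yaw):
    """Return the 8 corners (3, 8) of a 3D box in camera coordinates.

    center: (x, y, z) of the box bottom-center; size: (h, w, l);
    yaw: rotation about the vertical (y) axis. KITTI-style convention,
    assumed here for illustration.
    """
    h, w, l = size
    # Corner offsets in the box frame (x: length, y: height, z: width).
    x = np.array([ l/2,  l/2, -l/2, -l/2,  l/2,  l/2, -l/2, -l/2])
    y = np.array([ 0.0,  0.0,  0.0,  0.0,  -h,   -h,   -h,   -h ])
    z = np.array([ w/2, -w/2, -w/2,  w/2,  w/2, -w/2, -w/2,  w/2])
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])  # rotation about y
    corners = R @ np.vstack([x, y, z])                # (3, 8)
    return corners + np.asarray(center, dtype=float).reshape(3, 1)

def project_roi(corners, P):
    """Project 3D corners with a 3x4 camera matrix P and return the
    axis-aligned 2D RoI (u_min, v_min, u_max, v_max) in pixels."""
    pts = P @ np.vstack([corners, np.ones((1, corners.shape[1]))])
    uv = pts[:2] / pts[2]                             # perspective divide
    return uv[0].min(), uv[1].min(), uv[0].max(), uv[1].max()

# Toy example with a hypothetical intrinsic matrix (focal length 700 px).
P = np.array([[700.,   0., 600., 0.],
              [  0., 700., 180., 0.],
              [  0.,   0.,   1., 0.]])
corners = box3d_corners(center=(2.0, 1.5, 10.0),
                        size=(1.5, 1.6, 3.9), yaw=0.1)
roi = project_roi(corners, P)
```

In the full network, features would then be pooled from the image inside this 2D RoI and fused with the point features of the matching 3D RoI, so fusion happens only at a sparse set of region proposals rather than densely per point.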
Parameters:
TBA
Latex Bibtex:
TBA

Detailed Results

Object detection and orientation estimation results. Results for object detection are given in terms of average precision (AP) and results for joint object detection and orientation estimation are provided in terms of average orientation similarity (AOS).


Benchmark Easy Moderate Hard
Car (Detection) 96.29 % 93.19 % 88.14 %
Car (3D Detection) 88.43 % 79.41 % 72.58 %
Car (Bird's Eye View) 92.90 % 89.06 % 83.96 %


2D object detection results.



3D object detection results.



Bird's eye view results.



