Method

Orthographic Feature Transform for Monocular 3D Object Detection [OFT-Net]
[Anonymous Submission]

Submitted on 10 Nov. 2018 15:20 by
[Anonymous Submission]

Running time: 0.5 s
Environment: 8 cores @ 2.5 GHz (Python + C/C++)

Method Description:
The orthographic feature transform (OFT) maps perspective, image-based
feature maps into an orthographic bird's-eye-view representation of the
ground plane, from which 3D objects are detected.
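
A minimal sketch of this mapping in Python is given below (matching the
Python + C/C++ environment listed above). It assumes a pinhole camera with
known 3x3 intrinsics K, a fixed ground-plane height and nearest-neighbour
sampling at projected cell centres; the grid extents, cell size and the
function name oft_bev_features are illustrative assumptions, not the
submitted implementation, which accumulates features over the projected
extent of each voxel and collapses a vertical stack of voxels onto the
ground plane.

import numpy as np

def oft_bev_features(image_feats, K, x_range=(-40.0, 40.0),
                     z_range=(0.0, 80.0), ground_y=1.65, cell=0.5):
    """Map an image-aligned feature map (C, H, W) onto a bird's-eye-view
    grid (C, X, Z) lying on the ground plane.

    image_feats : CNN features aligned with the input image, shape (C, H, W).
    K           : 3x3 camera intrinsic matrix (assumed known).
    ground_y    : assumed ground-plane height below the camera, in metres
                  (y points downwards, as in KITTI camera coordinates).
    """
    C, H, W = image_feats.shape
    xs = np.arange(x_range[0], x_range[1], cell)  # lateral positions (m)
    zs = np.arange(z_range[0], z_range[1], cell)  # forward distances (m)
    bev = np.zeros((C, len(xs), len(zs)), dtype=image_feats.dtype)

    for i, x in enumerate(xs):
        for j, z in enumerate(zs):
            # Project the ground-plane cell centre (x, ground_y, z) into the image.
            p = K @ np.array([x, ground_y, z])
            if p[2] <= 0.0:
                continue  # cell is behind the camera
            u = int(round(p[0] / p[2]))
            v = int(round(p[1] / p[2]))
            if 0 <= u < W and 0 <= v < H:
                # Pull the image feature at the projected pixel into the
                # orthographic (bird's-eye-view) cell.
                bev[:, i, j] = image_feats[:, v, u]
    return bev
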
Parameters:
N/A
Latex Bibtex:

Detailed Results

Object detection and orientation estimation results. Object detection results are reported as average precision (AP); joint object detection and orientation estimation results are reported as average orientation similarity (AOS).


Benchmark                     Easy      Moderate  Hard
Car (3D Detection)            3.28 %    2.50 %    2.27 %
Car (Bird's Eye View)         9.50 %    7.99 %    7.51 %
Pedestrian (3D Detection)     1.06 %    1.11 %    1.06 %
Pedestrian (Bird's Eye View)  1.93 %    1.55 %    1.65 %
Cyclist (3D Detection)        0.43 %    0.43 %    0.43 %
Cyclist (Bird's Eye View)     0.79 %    0.43 %    0.43 %
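
The AP values above are interpolated average precisions computed from each
benchmark's precision/recall curve. As a rough illustration only (not the
official KITTI evaluation code), the Python sketch below shows how an
interpolated AP is obtained; the number of recall thresholds and the toy
curve are illustrative assumptions.

import numpy as np

def interpolated_ap(recall, precision, num_points=11):
    """Mean, over num_points equally spaced recall thresholds, of the
    maximum precision attained at or beyond each threshold."""
    recall = np.asarray(recall, dtype=float)
    precision = np.asarray(precision, dtype=float)
    ap = 0.0
    for t in np.linspace(0.0, 1.0, num_points):
        above = precision[recall >= t]
        ap += (above.max() if above.size else 0.0) / num_points
    return ap

# Toy precision/recall curve (illustrative values only).
r = [0.0, 0.2, 0.4, 0.6, 0.8]
p = [1.0, 0.9, 0.7, 0.5, 0.3]
print("AP = %.3f" % interpolated_ap(r, p))
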


[Figure] 3D object detection results.
[Figure] Bird's eye view results.



