Method

EgoNet (monocular RGB only) [EgoNet]
https://github.com/Nicholasli1995/EgoNet

Submitted on 29 Mar. 2021 13:22 by
Shichao Li (Hong Kong University of Science and Technology)

Running time: 0.1 s
Environment: GPU @ 1.5 GHz (Python)

Method Description:
This method estimates egocentric vehicle
orientation from a single RGB image using
intermediate geometric representations. Only the
training split (3,682 images) is used during
training, and no additional labels of any form
are used.
Parameters:
\confidence_threshold=0.1
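The confidence threshold above discards low-scoring detections before evaluation. A minimal sketch of that filtering step, assuming detections are dicts with a 'score' field (the field names and detection format are illustrative, not EgoNet's actual code):

```python
# Hypothetical illustration of the confidence-threshold filter.
# The detection format is an assumption for this sketch.

CONFIDENCE_THRESHOLD = 0.1  # \confidence_threshold from the parameters above

def filter_detections(detections, threshold=CONFIDENCE_THRESHOLD):
    """Keep only detections whose score meets the threshold."""
    return [d for d in detections if d["score"] >= threshold]

# Example: three detections, one below the threshold.
dets = [
    {"score": 0.95, "alpha": -1.2},
    {"score": 0.05, "alpha": 0.4},   # discarded
    {"score": 0.30, "alpha": 2.1},
]
kept = filter_detections(dets)
```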
Latex Bibtex:
@InProceedings{Li_2021_CVPR,
author = {Li, Shichao and Yan, Zengqiang and Li,
Hongyang and Cheng, Kwang-Ting},
title = {Exploring intermediate representation
for monocular vehicle pose estimation},
booktitle = {The IEEE/CVF Conference on Computer
Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2021}
}

Detailed Results

Object detection and orientation estimation results. Results for object detection are given in terms of average precision (AP), and results for joint object detection and orientation estimation are provided in terms of average orientation similarity (AOS).
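AOS weights detection precision by how well the predicted orientation matches the ground truth: each matched detection contributes a similarity of (1 + cos Δθ)/2, where Δθ is the difference between predicted and ground-truth observation angles. A minimal sketch of that similarity term, following the published definition rather than the official KITTI evaluation code:

```python
import math

def orientation_similarity(delta_thetas):
    """Average orientation similarity over matched detections.

    Each delta_theta is the angular error (radians) between the
    predicted and ground-truth observation angle. A perfect match
    contributes 1.0; a 180-degree error contributes 0.0.
    """
    if not delta_thetas:
        return 0.0
    return sum((1.0 + math.cos(d)) / 2.0 for d in delta_thetas) / len(delta_thetas)

# Perfectly aligned predictions score 1.0; opposite orientations score ~0.0.
perfect = orientation_similarity([0.0, 0.0])
flipped = orientation_similarity([math.pi])
```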


Benchmark Easy Moderate Hard
Car (Detection) 96.18 % 91.39 % 81.33 %
Car (Orientation) 96.11 % 91.23 % 80.96 %


2D object detection results.



Orientation estimation results.



