Method

TopNet-DecayRate


Submitted on 3 Nov. 2018 23:22 by
Sascha Wirges (Karlsruhe Institute of Technology)

Running time: 92 ms
Environment: NVIDIA GeForce 1080 Ti (tensorflow-gpu)

Method Description:
Object detection by deep convolutional networks,
consistently adapted to the multi-layer occupancy
grid domain. Only the Velodyne laser scanner is used.
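A minimal sketch of how a multi-layer top-view grid can be built from a Velodyne point cloud (function name, grid ranges, and layer order are my assumptions, not the authors' code; the decay-rate layer additionally requires per-ray distances from ray casting and is omitted here):

```python
import numpy as np

def grid_features(points, cell_size=0.15, x_range=(0.0, 60.0), y_range=(-30.0, 30.0)):
    """Bin a point cloud (N x 4 array: x, y, z, intensity) into a
    top-view multi-layer grid. Layers: 0 = max intensity,
    1 = min z, 2 = max z. Cells without points keep min z = +inf
    and max z = -inf. Ranges and layer order are illustrative."""
    nx = int((x_range[1] - x_range[0]) / cell_size)
    ny = int((y_range[1] - y_range[0]) / cell_size)
    grid = np.zeros((nx, ny, 3), dtype=np.float32)
    grid[:, :, 1] = np.inf    # min z accumulator
    grid[:, :, 2] = -np.inf   # max z accumulator
    # Map each point to its cell index and drop out-of-range points.
    ix = ((points[:, 0] - x_range[0]) / cell_size).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell_size).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    for i, j, z, inten in zip(ix[ok], iy[ok], points[ok, 2], points[ok, 3]):
        grid[i, j, 0] = max(grid[i, j, 0], inten)
        grid[i, j, 1] = min(grid[i, j, 1], z)
        grid[i, j, 2] = max(grid[i, j, 2], z)
    return grid
```

The resulting (H, W, C) tensor can be fed to an image-style detector such as the Faster R-CNN meta architecture listed below.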
Parameters:
Meta architecture: Faster R-CNN;
Feature extractor: Resnet101;
Grid cell features: Intensity, min./max. z
coordinate, decay rate;
Grid cell size: 15 cm;
Box encoding: position, length, width,
sin(2*angle)/cos(2*angle);
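The sin(2*angle)/cos(2*angle) box encoding maps headings that differ by pi (front/back flips of the same box) to the same regression target. A minimal sketch of the encode/decode pair (function names are mine, not the authors'):

```python
import math

def encode_angle(theta):
    """Encode a box heading as (sin 2θ, cos 2θ); headings that
    differ by π produce the identical target."""
    return math.sin(2.0 * theta), math.cos(2.0 * theta)

def decode_angle(s, c):
    """Recover a heading in (-π/2, π/2] from the regressed pair."""
    return 0.5 * math.atan2(s, c)
```

Regressing the pair rather than the raw angle also avoids the discontinuity at the angular wrap-around.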
LaTeX BibTeX:
@article{Wirges2018,
abstract = {A detailed environment perception is
a crucial component of automated vehicles.
However, to deal with the amount of perceived
information, we also require segmentation
strategies. Based on a grid map environment
representation, well-suited for sensor fusion,
free-space estimation and machine learning, we
detect and classify objects using deep
convolutional neural networks. As input for our
networks we use a multi-layer grid map
efficiently encoding 3D range sensor information.
The inference output consists of a list of
rotated bounding boxes with associated semantic
classes. We conduct extensive ablation studies,
highlight important design considerations when
using grid maps and evaluate our models on the
KITTI Bird's Eye View benchmark. Qualitative and
quantitative benchmark results show that we
achieve robust detection and state of the art
accuracy solely using top-view grid maps from
range sensor data.},
archivePrefix = {arXiv},
arxivId = {1805.08689},
author = {Wirges, Sascha and Fischer, Tom and
Frias, Jesus Balado and Stiller, Christoph},
eprint = {1805.08689},
month = {may},
title = {{Object Detection and Classification in
Occupancy Grid Maps using Deep Convolutional
Networks}},
url = {http://arxiv.org/abs/1805.08689},
year = {2018}
}

Detailed Results

Object detection and orientation estimation results. Results for object detection are given in terms of average precision (AP) and results for joint object detection and orientation estimation are provided in terms of average orientation similarity (AOS).


Benchmark                      Easy     Moderate  Hard
Car (Detection)                0.04 %   0.04 %    0.04 %
Car (Bird's Eye View)         79.76 %  64.12 %   56.48 %
Pedestrian (Detection)         0.02 %   0.04 %    0.05 %
Pedestrian (Bird's Eye View)  15.09 %  12.59 %   12.23 %
Cyclist (Detection)            0.04 %   1.01 %    1.01 %
Cyclist (Bird's Eye View)     28.06 %  19.92 %   19.13 %


[Figures: 2D object detection results and bird's eye view results, one pair of plots per benchmark class.]



