Method

Multimodal vehicle detection [Multimodal Detection]
https://github.com/alirezaasvadi/Multimodal

Submitted on 15 Jun. 2018 16:50 by
Alireza Asvadi

Running time: 0.06 s
Environment: GPU @ 3.5 GHz (Matlab + C/C++)

Method Description:
A multisensor (color camera and 3D-LIDAR), multimodal (color image plus 3D-LIDAR range and reflectance data) vehicle detection system. Three modalities, the color image and dense (up-sampled) representations of the sparse 3D-LIDAR range and reflectance data, are fed as individual inputs to the YOLOv2 real-time object detection framework, which produces vehicle detections as bounding boxes in each modality; a decision-level fusion step then combines the three sets of detections.
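As a rough illustration of decision-level fusion of per-modality bounding boxes: the sketch below pools detections from all modalities, greedily groups boxes by IoU overlap, and averages the scores within each group. This grouping-and-averaging rule is a hypothetical stand-in; the paper's exact fusion strategy is not reproduced here.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def fuse_detections(per_modality, iou_thr=0.5):
    """Decision-level fusion sketch (hypothetical rule): pool (box, score)
    detections from all modalities, greedily group boxes that overlap the
    current highest-scoring box by at least iou_thr, and report the group's
    top box with the group's average score."""
    pooled = [d for dets in per_modality for d in dets]
    pooled.sort(key=lambda d: d[1], reverse=True)
    fused = []
    while pooled:
        box, score = pooled.pop(0)
        group, rest = [(box, score)], []
        for b, s in pooled:
            (group if iou(box, b) >= iou_thr else rest).append((b, s))
        pooled = rest
        avg_score = sum(s for _, s in group) / len(group)
        fused.append((box, avg_score))
    return fused
```

For example, detections of the same vehicle from the color-image and range detectors would collapse into one fused box, while a detection seen only in the reflectance map survives with its single-modality score.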
Parameters:
NA
Latex Bibtex:
@article{asvadi2017multimodal,
  title={Multimodal vehicle detection: fusing 3D-LIDAR and color camera data},
  author={Asvadi, Alireza and Garrote, Luis and Premebida, Cristiano and Peixoto, Paulo and Nunes, Urbano J},
  journal={Pattern Recognition Letters},
  year={2017},
  publisher={Elsevier}
}

Detailed Results

Object detection and orientation estimation results. Results for object detection are given in terms of average precision (AP) and results for joint object detection and orientation estimation are provided in terms of average orientation similarity (AOS).
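For reference, the AP numbers below follow the interpolated average-precision protocol: precision is interpolated at a fixed set of recall levels and averaged. A minimal sketch, assuming the 11-point variant (recall levels 0.0, 0.1, ..., 1.0) that KITTI used at the time of this submission:

```python
def average_precision(recalls, precisions):
    """Interpolated AP over 11 equally spaced recall levels.
    At each level t, precision is interpolated as the maximum
    precision attained at any recall >= t."""
    ap = 0.0
    for i in range(11):
        t = i / 10.0
        p = max((p for r, p in zip(recalls, precisions) if r >= t),
                default=0.0)
        ap += p / 11.0
    return ap
```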


Benchmark        Easy     Moderate  Hard
Car (Detection)  64.04 %  46.77 %   39.38 %


[Figure: 2D object detection results (precision-recall curves).]



