Method

CRF-based LiDAR-Camera Fusion [LC-CRF]


Submitted on 14 Sep. 2018 11:57 by
Shuo Gu (Nanjing University of Science and Technology)

Running time: 0.18 s
Environment: GPU @ 1.5 GHz (Python + C/C++)

Method Description:
Road detection through CRF-based fusion of LiDAR and camera data
Parameters:
TBA
Latex Bibtex:
@inproceedings{GuZTYK19,
  author    = {Shuo Gu and Yigong Zhang and Jinhui Tang and Jian Yang and Hui Kong},
  title     = {Road Detection through {CRF} based LiDAR-Camera Fusion},
  booktitle = {{ICRA}},
  pages     = {3832--3838},
  publisher = {{IEEE}},
  year      = {2019}
}

Evaluation in Bird's Eye View


Benchmark    MaxF     AP       PRE      REC      FPR     FNR
UM_ROAD      94.91 %  86.41 %  91.92 %  98.11 %  3.93 %  1.89 %
UMM_ROAD     97.08 %  92.06 %  96.03 %  98.16 %  4.46 %  1.84 %
UU_ROAD      94.01 %  85.24 %  91.31 %  96.88 %  3.00 %  3.12 %
URBAN_ROAD   95.68 %  88.34 %  93.62 %  97.83 %  3.67 %  2.17 %
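The columns above are standard pixel-wise measures over a confidence map and a ground-truth road mask. As a minimal sketch (assuming a confidence map in [0, 1] and a boolean ground-truth mask; `bev_metrics` and the threshold sweep are illustrative, not the benchmark's own evaluation code), MaxF is the F1 score maximised over the decision threshold, with precision, recall, FPR, and FNR taken at that same threshold:

```python
import numpy as np

def bev_metrics(conf, gt, thresholds=np.linspace(0.0, 1.0, 101)):
    """Pixel-wise road-detection metrics (hypothetical re-implementation).

    conf: float confidence map in [0, 1]; gt: boolean ground-truth road mask.
    Returns the metrics at the threshold that maximises F1 (MaxF)."""
    gt = gt.astype(bool)
    best = None
    for t in thresholds:
        pred = conf >= t
        tp = np.sum(pred & gt)          # road pixels correctly detected
        fp = np.sum(pred & ~gt)         # non-road pixels marked as road
        fn = np.sum(~pred & gt)         # road pixels missed
        tn = np.sum(~pred & ~gt)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        fnr = fn / (fn + tp) if fn + tp else 0.0
        if best is None or f1 > best["MaxF"]:
            best = {"MaxF": f1, "PRE": prec, "REC": rec,
                    "FPR": fpr, "FNR": fnr}
    return best
```

AP (average precision over the precision/recall curve) is omitted here for brevity.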

Behavior Evaluation


Benchmark PRE-20 F1-20 HR-20 PRE-30 F1-30 HR-30 PRE-40 F1-40 HR-40

Road/Lane Detection

The following plots show precision/recall curves for the bird's eye view evaluation.


Distance-dependent Behavior Evaluation

The following plots show the F1 score, precision, and hit rate with respect to the longitudinal distance used for evaluation.
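One plausible reading of the PRE-d / F1-d / HR-d columns is that each metric is restricted to pixels within a longitudinal distance d (20, 30, or 40 m). A minimal sketch under that assumption (the function name, the distance map `dist`, and the interpretation of hit rate as recall are all hypothetical):

```python
import numpy as np

def behavior_metrics(pred, gt, dist, ranges=(20, 30, 40)):
    """Distance-restricted metrics: for each range d, evaluate only pixels
    whose longitudinal distance is at most d metres (assumed semantics).

    pred, gt: boolean masks; dist: per-pixel longitudinal distance in metres."""
    out = {}
    for d in ranges:
        m = dist <= d                    # pixels inside the evaluation range
        tp = np.sum(pred & gt & m)
        fp = np.sum(pred & ~gt & m)
        fn = np.sum(~pred & gt & m)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0   # hit rate read as recall
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        out[d] = {"PRE": prec, "F1": f1, "HR": rec}
    return out
```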


Visualization of Results

The following images illustrate the performance of the method qualitatively on several test images. We first show results in the perspective image, followed by the evaluation in bird's eye view. Red denotes false negatives, blue false positives, and green true positives.
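The color coding described above can be reproduced from a prediction mask and a ground-truth mask. A minimal array-based sketch (this is an illustration of the color scheme, not the benchmark's own renderer):

```python
import numpy as np

def color_code(pred, gt):
    """RGB error image: green = true positive, blue = false positive,
    red = false negative, black = true negative (background).

    pred, gt: boolean road masks of the same shape."""
    h, w = gt.shape
    img = np.zeros((h, w, 3), dtype=np.uint8)
    img[pred & gt] = (0, 255, 0)       # true positives  -> green
    img[pred & ~gt] = (0, 0, 255)      # false positives -> blue
    img[~pred & gt] = (255, 0, 0)      # false negatives -> red
    return img
```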




