## Method

TVFNet [TVFNet]

Submitted on 25 Feb. 2019 10:23 by Shuo Gu (Nanjing University of Science and Technology)

- Running time: 0.04 s
- Environment: GPU @ 1.5 GHz (Python)
- Method description: LiDAR-only
- Parameters: TBA

BibTeX:

```
@inproceedings{GuZYAK19,
  author    = {Shuo Gu and Yigong Zhang and Jian Yang and Jose M. Alvarez and Hui Kong},
  title     = {Two-View Fusion based Convolutional Neural Network for Urban Road Detection},
  booktitle = {{IROS}},
  pages     = {6144--6149},
  publisher = {{IEEE}},
  year      = {2019}
}
```

## Evaluation in Bird's Eye View

| Benchmark  | MaxF    | AP      | PRE     | REC     | FPR    | FNR    |
|------------|---------|---------|---------|---------|--------|--------|
| UM_ROAD    | 94.96 % | 89.17 % | 94.95 % | 94.97 % | 2.30 % | 5.03 % |
| UMM_ROAD   | 96.47 % | 93.16 % | 97.24 % | 95.71 % | 2.98 % | 4.29 % |
| UU_ROAD    | 93.65 % | 87.57 % | 93.87 % | 93.43 % | 1.99 % | 6.57 % |
| URBAN_ROAD | 95.34 % | 90.26 % | 95.73 % | 94.94 % | 2.33 % | 5.06 % |
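The table reports pixel-wise metrics computed in the bird's eye view: MaxF is the F1 score maximized over the confidence threshold, with PRE/REC/FPR/FNR taken at that same threshold. A minimal sketch of this computation is shown below; the function name, array shapes, and the threshold grid are illustrative assumptions, not the official benchmark devkit.

```python
import numpy as np

def road_metrics(conf, gt, thresholds=np.linspace(0.0, 1.0, 101)):
    """Sweep a confidence threshold and return metrics at the F1-maximizing one.

    conf: per-pixel road confidence in [0, 1] (any shape).
    gt:   binary ground-truth road mask of the same shape.
    Note: illustrative sketch, not the official evaluation code.
    """
    gt = np.asarray(gt).astype(bool)
    conf = np.asarray(conf)
    best = None
    for t in thresholds:
        pred = conf >= t
        tp = np.sum(pred & gt)    # road pixels correctly detected
        fp = np.sum(pred & ~gt)   # background wrongly marked as road
        fn = np.sum(~pred & gt)   # road pixels missed
        tn = np.sum(~pred & ~gt)  # background correctly rejected
        pre = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * pre * rec / (pre + rec) if pre + rec else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        fnr = fn / (fn + tp) if fn + tp else 0.0
        if best is None or f1 > best["MaxF"]:
            best = {"MaxF": f1, "PRE": pre, "REC": rec, "FPR": fpr, "FNR": fnr}
    return best
```

AP (average precision) is additionally computed by integrating the precision/recall curve over the same threshold sweep.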

## Behavior Evaluation

| Benchmark | PRE-20 | F1-20 | HR-20 | PRE-30 | F1-30 | HR-30 | PRE-40 | F1-40 | HR-40 |
|-----------|--------|-------|-------|--------|-------|-------|--------|-------|-------|

The following plots show precision/recall curves for the bird's eye view evaluation.

## Distance-dependent Behavior Evaluation

The following plots show the F1 score, precision, and hitrate with respect to the longitudinal distance used for evaluation.
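Distance-dependent evaluation restricts the metrics to BEV pixels within a given longitudinal distance from the vehicle (e.g. the 20 m, 30 m, 40 m cut-offs in the table above). A hedged sketch of this masking step follows; the BEV resolution, row orientation, and function name are assumptions for illustration only.

```python
import numpy as np

def distance_limited_f1(pred, gt, max_dist_m, meters_per_row=0.05):
    """F1 score over BEV pixels whose longitudinal distance <= max_dist_m.

    Assumption (not from the benchmark spec): rows run from far (row 0)
    to near (last row), with the vehicle at the bottom edge, and each
    row spans `meters_per_row` meters longitudinally.
    """
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    rows = pred.shape[0]
    # Longitudinal distance of each row from the vehicle.
    dist = (rows - np.arange(rows)) * meters_per_row
    mask = dist <= max_dist_m
    p, g = pred[mask], gt[mask]
    tp = np.sum(p & g)
    fp = np.sum(p & ~g)
    fn = np.sum(~p & g)
    pre = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * pre * rec / (pre + rec) if pre + rec else 0.0
```

Sweeping `max_dist_m` over a range of distances yields curves like the ones plotted here.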

## Visualization of Results

The following images illustrate the performance of the method qualitatively on a couple of test images. We first show results in the perspective image, followed by evaluation in bird's eye view. Here, red denotes false negatives, blue areas correspond to false positives and green represents true positives.
