Method

PT-ResNet [PT-ResNet]


Submitted on 4 Jan. 2019 08:11 by
Rui Fan (The Hong Kong University of Science and Technology)

Running time: 0.3 s
Environment: GPU @ 2.5 GHz (Python)

Method Description:
(1) Input: the input to the neural network is a
7-channel image consisting of two RGB images and
one disparity map. The RGB images are captured by
a pair of synchronised stereo cameras, and the
right image is transformed into the perspective
view of the left image (a minimal warping sketch
is given below).
(2) Model: DeepLabv3+
Parameters:
height × width × 7
LaTeX BibTeX:
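Since the page gives only the channel layout, the following is a minimal, hypothetical NumPy sketch of how such a 7-channel input could be assembled; the function name, array shapes, and nearest-neighbour warping scheme are assumptions, not the authors' published code.

import numpy as np

def build_network_input(left_rgb, right_rgb, disparity):
    # Hypothetical helper (not the authors' code): stacks a 7-channel
    # input from the left RGB image, the right RGB image warped into
    # the left view, and the disparity map. left_rgb/right_rgb are
    # (H, W, 3) arrays, disparity is (H, W), all rectified.
    h, w = disparity.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # For rectified stereo, left pixel (y, x) corresponds to right
    # pixel (y, x - d); sample the right image there (nearest
    # neighbour, clipped at the image border -- an assumed scheme).
    src_x = np.clip(np.round(xs - disparity).astype(int), 0, w - 1)
    warped_right = right_rgb[ys, src_x]                   # (H, W, 3)
    channels = [left_rgb, warped_right, disparity[..., None]]
    return np.concatenate(channels, axis=-1).astype(np.float32)  # (H, W, 7)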

Evaluation in Bird's Eye View


Benchmark    MaxF     AP       PRE      REC      FPR     FNR
UM_ROAD      91.05 %  91.32 %  91.50 %  90.61 %  3.83 %   9.39 %
UMM_ROAD     93.03 %  93.06 %  92.19 %  93.88 %  8.74 %   6.12 %
UU_ROAD      87.93 %  88.95 %  87.72 %  88.15 %  4.02 %  11.85 %
URBAN_ROAD   91.19 %  91.21 %  90.78 %  91.60 %  5.13 %   8.40 %
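In this table, MaxF is the maximum F1 score over confidence thresholds, AP the average precision, PRE/REC precision and recall (reported at the maximum-F working point), and FPR/FNR the false positive and false negative rates. The sketch below illustrates the standard pixel-wise definitions, assuming flat arrays of per-pixel confidences and boolean ground truth; it is not the official KITTI devkit implementation.

import numpy as np

def bev_metrics(conf, gt, thresholds=np.linspace(0.0, 1.0, 100)):
    # Sketch of the standard pixel-wise metrics over the evaluated
    # bird's-eye-view pixels (`gt` boolean). Not the KITTI devkit code.
    best = {"MaxF": 0.0}
    for t in thresholds:
        pred = conf >= t
        tp = np.sum(pred & gt)
        fp = np.sum(pred & ~gt)
        fn = np.sum(~pred & gt)
        tn = np.sum(~pred & ~gt)
        pre = tp / max(tp + fp, 1)
        rec = tp / max(tp + fn, 1)              # recall = 1 - FNR
        f1 = 2 * pre * rec / max(pre + rec, 1e-9)
        if f1 > best["MaxF"]:
            best = {"MaxF": f1, "PRE": pre, "REC": rec,
                    "FPR": fp / max(fp + tn, 1),
                    "FNR": fn / max(tp + fn, 1)}
    return best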

Behavior Evaluation



Road/Lane Detection

The following plots show precision/recall curves for the bird's eye view evaluation.



[Four precision/recall curve plots, one per benchmark: UM_ROAD, UMM_ROAD, UU_ROAD, and URBAN_ROAD.]

Distance-dependent Behavior Evaluation

The following plots show the F1 score, precision, and hit rate with respect to the longitudinal distance used for evaluation.
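A distance-dependent evaluation of this kind can be sketched by binning BEV pixels by their longitudinal distance and scoring each bin separately; the bin edges, array names, and binning scheme below are assumptions, not the benchmark's actual protocol.

import numpy as np

def distance_binned_scores(pred, gt, z, edges=np.arange(5.0, 50.0, 5.0)):
    # Hypothetical distance-dependent evaluation: `pred` and `gt` are
    # boolean road masks over BEV pixels, `z` the per-pixel longitudinal
    # distance in metres. Bin edges are an assumption.
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (z >= lo) & (z < hi)                # pixels in this band
        tp = np.sum(pred[m] & gt[m])
        fp = np.sum(pred[m] & ~gt[m])
        fn = np.sum(~pred[m] & gt[m])
        pre = tp / max(tp + fp, 1)
        hit = tp / max(tp + fn, 1)              # hit rate = recall
        f1 = 2 * pre * hit / max(pre + hit, 1e-9)
        out.append({"range": (lo, hi), "F1": f1, "PRE": pre, "Hitrate": hit})
    return out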


Visualization of Results

The following images illustrate the performance of the method qualitatively on several test images. We first show results in the perspective view, followed by the evaluation in bird's eye view. Here, red denotes false negatives, blue areas correspond to false positives, and green represents true positives.
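Before the images, here is a minimal sketch of how such a colour-coded error overlay can be produced from boolean masks; the function name and mask conventions are assumed, and this is not the benchmark's own rendering code.

import numpy as np

def error_overlay(pred, gt):
    # Hypothetical overlay generator: `pred` and `gt` are boolean
    # (H, W) road masks; returns an (H, W, 3) uint8 RGB image using
    # the page's colour coding.
    overlay = np.zeros(pred.shape + (3,), dtype=np.uint8)
    overlay[~pred & gt] = (255, 0, 0)    # red: false negatives
    overlay[pred & ~gt] = (0, 0, 255)    # blue: false positives
    overlay[pred & gt] = (0, 255, 0)     # green: true positives
    return overlay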



[Eighteen qualitative result images: results in the perspective view followed by the corresponding bird's-eye-view evaluations.]

