Method

YOLOv2 416x416 detection framework [YOLOv2]
https://pjreddie.com/darknet/yolo/

Submitted on 10 Apr. 2017 17:13 by
Alireza Asvadi

Running time: 0.02 s
Environment: GPU @ 3.5 GHz (C/C++)

Method Description:
An experiment using the base YOLOv2 416 x 416
detection framework with its default weights
(i.e., without training on KITTI). The 'person',
'bicycle', and 'car' classes (out of YOLOv2/COCO's
80 object categories) are mapped to the KITTI
'Pedestrian', 'Cyclist', and 'Car' classes.
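The class mapping described above can be sketched as follows. This is a hypothetical illustration, not the author's code; the `detections` structure (label, score, box) and the function name are assumptions.

```python
# Hypothetical sketch: relabel YOLOv2/COCO detections as KITTI classes,
# discarding the other 77 COCO categories. Not the submission's actual code.

COCO_TO_KITTI = {
    "person": "Pedestrian",
    "bicycle": "Cyclist",
    "car": "Car",
}

def map_detections(detections):
    """Keep only detections whose COCO label has a KITTI counterpart,
    relabelling them. Each detection is a (label, score, box) tuple."""
    return [(COCO_TO_KITTI[label], score, box)
            for label, score, box in detections
            if label in COCO_TO_KITTI]

# Example: the 'dog' detection has no KITTI counterpart and is dropped.
dets = [("person", 0.9, (10, 20, 50, 120)),
        ("dog", 0.8, (0, 0, 30, 30))]
print(map_detections(dets))  # [('Pedestrian', 0.9, (10, 20, 50, 120))]
```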
Parameters:
NA
Latex Bibtex:
@inproceedings{redmon2016you,
  title     = {You Only Look Once: Unified, Real-Time Object Detection},
  author    = {Redmon, Joseph and Divvala, Santosh and Girshick, Ross and Farhadi, Ali},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages     = {779--788},
  year      = {2016}
}

@inproceedings{redmon2017yolo9000,
  title     = {YOLO9000: Better, Faster, Stronger},
  author    = {Redmon, Joseph and Farhadi, Ali},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year      = {2017}
}

Detailed Results

Object detection and orientation estimation results. Results for object detection are given in terms of average precision (AP) and results for joint object detection and orientation estimation are provided in terms of average orientation similarity (AOS).


Benchmark                Easy      Moderate  Hard
Car (Detection)          28.37 %   19.31 %   15.94 %
Pedestrian (Detection)   20.80 %   16.19 %   15.43 %
Cyclist (Detection)       4.55 %    4.55 %    4.55 %