Method

Multi-Modal Fusion Method [DPPFA-Net]


Submitted on 4 Jan. 2023 15:51 by
Roland xx (HKU)

Running time: 0.1 s
Environment: 1 core @ 2.5 GHz (Python)

Method Description:
Multi-modal fusion method for 3D object detection (DPPFA-Net).
Parameters:
Epochs = 80
LaTeX BibTeX:
@ARTICLE{10308573,
  author={Wang, Juncheng and Kong, Xiangbo and Nishikawa, Hiroki and Lian, Qiuyou and Tomiyama, Hiroyuki},
  journal={IEEE Internet of Things Journal},
  title={Dynamic Point-Pixel Feature Alignment for Multi-modal 3D Object Detection},
  year={2023},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/JIOT.2023.3329884}}

Detailed Results

Object detection and orientation estimation results. Results for object detection are given in terms of average precision (AP) and results for joint object detection and orientation estimation are provided in terms of average orientation similarity (AOS).
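
As a rough illustration of how these metrics are computed, the sketch below shows the 11-point interpolated average that the classic KITTI evaluation uses for both AP and AOS. The recall, precision, and orientation-similarity curves here are made-up placeholder values, not the actual DPPFA-Net results, and the matching of detections to ground truth (IoU thresholds per difficulty level) is assumed to have happened upstream.

import numpy as np

def eleven_point_average(values, recalls):
    # KITTI-style 11-point interpolation: at each recall threshold
    # r in {0.0, 0.1, ..., 1.0}, take the curve's maximum over all
    # recalls >= r, then average the eleven maxima.
    total = 0.0
    for r in np.linspace(0.0, 1.0, 11):
        above = values[recalls >= r]
        total += above.max() if above.size else 0.0
    return total / 11.0

# Placeholder curves obtained by sweeping the detection score threshold.
recalls = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
precisions = np.array([0.95, 0.90, 0.80, 0.65, 0.40])
# Orientation similarity: (1 + cos(delta_theta)) / 2 averaged over
# detections at each threshold, counting false positives as 0.
orient_sim = np.array([0.93, 0.87, 0.75, 0.60, 0.35])

print(f"AP  = {100 * eleven_point_average(precisions, recalls):.2f} %")
print(f"AOS = {100 * eleven_point_average(orient_sim, recalls):.2f} %")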


Benchmark                       Easy      Moderate  Hard
Pedestrian (Detection)          67.68 %   59.52 %   56.87 %
Pedestrian (Orientation)        56.13 %   48.38 %   45.93 %
Pedestrian (3D Detection)       53.58 %   46.14 %   42.59 %
Pedestrian (Bird's Eye View)    57.02 %   50.55 %   47.25 %
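The same table in LaTeX (a plain tabular rendering of the numbers above):

\begin{tabular}{lccc}
\hline
Benchmark & Easy & Moderate & Hard \\
\hline
Pedestrian (Detection)       & 67.68\,\% & 59.52\,\% & 56.87\,\% \\
Pedestrian (Orientation)     & 56.13\,\% & 48.38\,\% & 45.93\,\% \\
Pedestrian (3D Detection)    & 53.58\,\% & 46.14\,\% & 42.59\,\% \\
Pedestrian (Bird's Eye View) & 57.02\,\% & 50.55\,\% & 47.25\,\% \\
\hline
\end{tabular}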


[Figure: 2D object detection results]

[Figure: Orientation estimation results]

[Figure: 3D object detection results]

[Figure: Bird's eye view results]
