Method

3D Dual-Fusion [3D Dual-Fusion]
https://github.com/rasd3/3D-Dual-Fusion

Submitted on 19 Aug. 2022 06:38 by
Yecheol Kim (Hanyang University)

Running time: 0.1 s
Environment: 1 core @ 2.5 GHz (Python)

Method Description:
We propose a novel camera-LiDAR fusion architecture,
3D Dual-Fusion, designed to mitigate the gap
between the feature representations of camera and
LiDAR data. The proposed method fuses features
from the camera-view and 3D voxel-view domains and
models their interactions through deformable
attention.
Parameters:
TBA
Latex Bibtex:
@article{kim20223d,
  title={3D Dual-Fusion: Dual-Domain Dual-Query Camera-LiDAR Fusion for 3D Object Detection},
  author={Kim, Yecheol and Park, Konyul and Kim, Minwook and Kum, Dongsuk and Choi, Jun Won},
  journal={arXiv preprint arXiv:2211.13529},
  year={2022}
}

Detailed Results

Object detection and orientation estimation results. Results for object detection are given in terms of average precision (AP) and results for joint object detection and orientation estimation are provided in terms of average orientation similarity (AOS).


Benchmark               Easy      Moderate   Hard
Car (Detection)         96.54 %   95.82 %    93.11 %
Car (Orientation)       96.53 %   95.76 %    93.01 %
Car (3D Detection)      91.01 %   82.40 %    79.39 %
Car (Bird's Eye View)   93.08 %   90.86 %    86.44 %
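
The AOS rows above couple detection with heading accuracy. As a toy illustration of how such a metric behaves, here is a hedged NumPy sketch of orientation similarity averaged over recall levels. It is a simplification for intuition only; the official KITTI evaluation code defines the exact matching rules and recall grid (which now uses 40 recall positions).

import numpy as np

def orientation_similarity(delta_theta):
    """Per-detection similarity in [0, 1]: 1 for a perfect heading,
    0 for a heading flipped by pi. delta_theta is in radians."""
    return (1.0 + np.cos(delta_theta)) / 2.0

def average_orientation_similarity(recalls, similarities):
    """Average the interpolated orientation similarity over 11 equally
    spaced recall levels, mirroring how AP averages precision."""
    recalls, similarities = np.asarray(recalls), np.asarray(similarities)
    total = 0.0
    for r in np.linspace(0.0, 1.0, 11):
        mask = recalls >= r
        # Interpolation: take the best similarity at any recall >= r.
        total += similarities[mask].max() if mask.any() else 0.0
    return total / 11.0

# Toy example: three operating points along the precision-recall curve.
print(average_orientation_similarity([0.2, 0.5, 0.9], [0.98, 0.95, 0.90]))
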


[Figure: 2D object detection results.]

[Figure: Orientation estimation results.]

[Figure: 3D object detection results.]

[Figure: Bird's eye view results.]



