Method

Hyperbolic Cosine Transformer for LiDAR 3D Object Detection [ChTR3D]


Submitted on 11 Aug. 2022 09:01 by
Fanhang Yang (Tianjin University of Technology)

Running time: 0.06 s
Environment: 1 core @ 2.5 GHz (Python + C/C++)

Method Description:
Our method first samples keypoints within the
proposals generated by an efficient RPN. ChTR3D
then encodes rich contextual dependencies among
the points, with linear computational complexity,
via the proposed cosh-self-attention module.
Subsequently, the encoded point features are
decoded by the cosh-cross-attention module and an
FFN to obtain the final 3D bounding box
predictions (an illustrative sketch of the
cosh-attention idea is given below).
Parameters:
a=1.1
Latex Bibtex:

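The following is a minimal sketch of how a hyperbolic-cosine similarity can yield attention with linear complexity in the number of points. It is not the authors' implementation: it assumes the similarity factorizes through the identity cosh(a(q - k)) = cosh(aq)cosh(ak) - sinh(aq)sinh(ak), and it treats the listed parameter a=1.1 as the scale inside that similarity. The function names (cosh_linear_attention, cosh_feature_maps) are illustrative.

# Sketch: linear-complexity attention with a cosh-based similarity.
# Assumption: sim(q_i, k_j) = sum_d cosh(a * (q_id - k_jd)), which factorizes
# into per-token feature maps, so the N x N attention matrix is never formed.

import torch


def cosh_feature_maps(x: torch.Tensor, a: float):
    # Map (N, d) features to the two factors of the cosh identity.
    ax = a * x
    return torch.cosh(ax), torch.sinh(ax)


def cosh_linear_attention(q, k, v, a: float = 1.1, eps: float = 1e-6):
    # q, k, v: (N, d) tensors for a single attention head.
    # sim(q_i, k_j) = sum_d [cosh(a q_id) cosh(a k_jd) - sinh(a q_id) sinh(a k_jd)],
    # so the output is phi(q) @ (psi(k)^T v) in O(N * d^2) instead of O(N^2 * d).
    q_cosh, q_sinh = cosh_feature_maps(q, a)
    k_cosh, k_sinh = cosh_feature_maps(k, a)

    # phi(q) = [cosh(aq), -sinh(aq)], psi(k) = [cosh(ak), sinh(ak)]
    phi_q = torch.cat([q_cosh, -q_sinh], dim=-1)   # (N, 2d)
    psi_k = torch.cat([k_cosh, k_sinh], dim=-1)    # (N, 2d)

    # Aggregate keys and values once, independently of the number of queries.
    kv = psi_k.transpose(0, 1) @ v                 # (2d, d)
    k_sum = psi_k.sum(dim=0)                       # (2d,)

    num = phi_q @ kv                               # (N, d)
    den = (phi_q @ k_sum).unsqueeze(-1) + eps      # (N, 1); positive since cosh >= 1
    return num / den


if __name__ == "__main__":
    torch.manual_seed(0)
    q = torch.randn(128, 32)
    k = torch.randn(128, 32)
    v = torch.randn(128, 32)
    print(cosh_linear_attention(q, k, v, a=1.1).shape)  # torch.Size([128, 32])

Because cosh(x) >= 1, every pairwise weight is strictly positive, so the normalization by the summed similarities is always well defined without a softmax.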
Detailed Results

Object detection and orientation estimation results. Results for object detection are given in terms of average precision (AP) and results for joint object detection and orientation estimation are provided in terms of average orientation similarity (AOS).


Benchmark               Easy      Moderate   Hard
Car (Detection)         96.22 %   95.26 %    90.51 %
Car (Orientation)       96.21 %   95.16 %    90.37 %
Car (3D Detection)      90.43 %   82.02 %    77.42 %
Car (Bird's Eye View)   92.72 %   89.04 %    86.29 %


[Figure] 2D object detection results.
[Figure] Orientation estimation results.
[Figure] 3D object detection results.
[Figure] Bird's eye view results.
