Method

GD-MAE: Generative Decoder for MAE Pre-training on LiDAR Point Clouds [GD-MAE]


Submitted on 6 Mar. 2023 09:32 by
Honghui Yang (Zhejiang University)

Running time: 0.07 s
Environment: 1 core @ 2.5 GHz (Python + C/C++)

Method Description:
TBD
Parameters:
TBD
Latex Bibtex:
@inproceedings{yang2023gd-mae,
  title={GD-MAE: Generative Decoder for MAE Pre-training on LiDAR Point Clouds},
  author={Honghui Yang and Tong He and Jiaheng Liu and Hua Chen and Boxi Wu and Binbin Lin and Xiaofei He and Wanli Ouyang},
  booktitle={CVPR},
  year={2023}
}

Detailed Results

Object detection and orientation estimation results. Object detection results are reported as average precision (AP); joint object detection and orientation estimation results are reported as average orientation similarity (AOS).
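AP is computed by interpolating the precision/recall curve at fixed recall thresholds and averaging the interpolated precisions (KITTI originally used 11 recall points; later evaluations use 40). As a minimal sketch of the classical 11-point variant, assuming a hypothetical helper name and toy precision/recall values chosen purely for illustration:

```python
def eleven_point_ap(recalls, precisions):
    """11-point interpolated AP: for each recall threshold t in
    {0.0, 0.1, ..., 1.0}, take the maximum precision achieved at
    recall >= t, then average the 11 values."""
    total = 0.0
    for t in [i / 10.0 for i in range(11)]:
        # interpolated precision at this recall threshold
        candidates = [p for r, p in zip(recalls, precisions) if r >= t]
        total += max(candidates, default=0.0)
    return total / 11.0

# Toy precision/recall curve (fabricated for illustration only).
recalls = [0.1, 0.4, 0.7, 1.0]
precisions = [1.0, 0.9, 0.8, 0.6]
print(f"AP = {eleven_point_ap(recalls, precisions):.3f}")
```

AOS follows the same averaging scheme, but replaces precision with orientation similarity, which down-weights each matched detection by (1 + cos Δθ)/2 for its heading error Δθ.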


Benchmark Easy Moderate Hard
Car (Detection) 98.38 % 95.54 % 90.42 %
Car (Orientation) 98.31 % 95.36 % 90.19 %
Car (3D Detection) 88.14 % 79.03 % 73.55 %
Car (Bird's Eye View) 94.22 % 88.82 % 83.54 %


2D object detection results.



Orientation estimation results.



3D object detection results.



Bird's eye view results.



