Method

Paint and Distill: Boosting 3D Object Detection with Semantic Passing Network [SPNet]
[Anonymous Submission]

Submitted on 7 Jun. 2022 09:54 by
[Anonymous Submission]

Running time: 0.08 s
Environment: 1 core @ 2.5 GHz (C/C++)

Method Description:
In this work, we propose a novel semantic passing framework, named SPNet, to boost the performance of existing LiDAR-based 3D detection models under the guidance of rich semantic painting, at no extra computation cost during inference. Our key design is to first exploit the instructive semantic knowledge contained in the ground-truth labels by training a semantic-painted teacher model, and then guide the pure-LiDAR student network to learn the semantic-painted representation via knowledge passing modules at three granularities: class-wise passing, pixel-wise passing, and instance-wise passing (an illustrative sketch of these passing losses is given below).
Parameters:
Latex Bibtex:
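
The passing modules themselves are not part of this entry, so the following is only a minimal, illustrative PyTorch sketch of what the three knowledge-passing losses described above could look like. The loss forms, tensor shapes, function names, and weighting factors are assumptions made for illustration, not the authors' implementation.

# Illustrative sketch only: assumed forms of the three passing losses.
import torch
import torch.nn.functional as F


def class_wise_passing(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened class distributions (assumed form)."""
    t = temperature
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * t * t


def pixel_wise_passing(student_bev, teacher_bev, fg_mask=None):
    """Per-pixel feature imitation on (B, C, H, W) BEV maps (assumed MSE form).

    fg_mask: optional (B, 1, H, W) mask restricting imitation to foreground pixels.
    """
    diff = (student_bev - teacher_bev) ** 2
    if fg_mask is not None:
        num = (diff * fg_mask).sum()
        den = (fg_mask.sum() * student_bev.shape[1]).clamp(min=1.0)
        return num / den
    return diff.mean()


def instance_wise_passing(student_roi_feats, teacher_roi_feats):
    """Cosine-similarity imitation of per-instance (RoI) features (assumed form)."""
    sim = F.cosine_similarity(student_roi_feats, teacher_roi_feats, dim=-1)
    return (1.0 - sim).mean()


def total_passing_loss(student_out, teacher_out, weights=(1.0, 1.0, 1.0)):
    """Combine the three granularities; the weights are placeholders."""
    w_cls, w_pix, w_ins = weights
    return (w_cls * class_wise_passing(student_out["logits"], teacher_out["logits"])
            + w_pix * pixel_wise_passing(student_out["bev"], teacher_out["bev"])
            + w_ins * instance_wise_passing(student_out["roi"], teacher_out["roi"]))

In such a setup the teacher would be frozen during student training, so the passing losses add cost only at training time, consistent with the claim of no extra inference overhead.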

Detailed Results

Object detection and orientation estimation results. Results for object detection are given in terms of average precision (AP) and results for joint object detection and orientation estimation are provided in terms of average orientation similarity (AOS).


Benchmark Easy Moderate Hard
Car (Detection) 95.99 % 93.23 % 92.60 %
Car (Orientation) 95.97 % 93.11 % 92.40 %
Car (3D Detection) 88.53 % 82.11 % 77.41 %
Car (Bird's Eye View) 92.29 % 88.92 % 86.16 %


Figure: 2D object detection results.

Figure: Orientation estimation results.

Figure: 3D object detection results.

Figure: Bird's eye view results.



