Method

Robust LiDAR odometry [RLO]
http://www.scitepress.org/DigitalLibrary/Link.aspx?doi=10.5220/0005719006260633

Submitted on 13 May 2019, 15:19 by
Martin Dimitrievski (IPI/TELIN)

Running time: 0.05 s
Environment: GPU @ 1.0 GHz (C/C++)

Method Description:
In this paper, we propose a novel real-time method for SLAM in autonomous vehicles. The environment is mapped using a probabilistic occupancy map model, and ego-motion is estimated within the same map by means of a feedback loop. This simplifies pose estimation from 6 to 3 degrees of freedom, which greatly improves the robustness and accuracy of the system. Input data is provided by a rotating laser scanner as 3D measurements of the current environment, which are projected onto the ground plane. The local ground plane is estimated in real time from the raw point cloud using a robust plane-fitting scheme based on the RANSAC principle. The computed occupancy map is then registered against the previous map using phase correlation in order to estimate the translation and rotation of the vehicle. Experimental results demonstrate that the method produces high-quality occupancy maps, and that the measured translation and rotation errors of the trajectories are lower than those of other 6-DOF methods. The entire SLAM system runs on a mid-range GPU and keeps up with the sensor data rate, leaving computational headroom for the other tasks of the autonomous vehicle.
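The registration step described above can be illustrated with a minimal sketch: for two occupancy grids that differ mostly by a shift, the peak of the normalized cross-power spectrum gives the translation. This is a simplified NumPy illustration, not the paper's GPU implementation, and it omits the rotation estimate (which would additionally require, e.g., correlation in a polar representation):

```python
import numpy as np

def phase_correlation_shift(map_prev, map_curr):
    """Estimate the integer (dy, dx) translation between two 2D grids
    via phase correlation: the normalized cross-power spectrum of the
    two maps has a sharp peak at the relative shift."""
    F_prev = np.fft.fft2(map_prev)
    F_curr = np.fft.fft2(map_curr)
    cross_power = F_curr * np.conj(F_prev)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase, drop magnitude
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the midpoint correspond to negative shifts (FFT wrap-around)
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

For integer circular shifts the peak is exact; real occupancy maps additionally call for sub-cell peak interpolation and handling of newly observed cells at the map borders.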
Parameters:
\enabled_ground_plane_estimation
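The ground-plane estimation enabled by this parameter can be sketched with a generic RANSAC plane fit; the iteration count and inlier threshold below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def ransac_ground_plane(points, iters=200, inlier_thresh=0.15, rng=None):
    """Fit a plane n.p + d = 0 (n unit-length) to an Nx3 point cloud with
    RANSAC: repeatedly fit a plane through 3 random points and keep the
    hypothesis with the most inliers."""
    rng = np.random.default_rng(rng)
    best_inliers, best_plane = -1, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ p0
        # Inliers: points within the distance threshold of the candidate plane
        n_in = np.count_nonzero(np.abs(points @ n + d) < inlier_thresh)
        if n_in > best_inliers:
            best_inliers, best_plane = n_in, (n, d)
    return best_plane
```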
Latex Bibtex:
@conference{visapp16,
author={Martin Dimitrievski and David Van Hamme and Peter Veelaert and Wilfried Philips},
title={Robust Matching of Occupancy Maps for Odometry in Autonomous Vehicles},
booktitle={Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 3: VISAPP, (VISIGRAPP 2016)},
year={2016},
pages={626-633},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005719006260633},
isbn={978-989-758-175-5},
}

Detailed Results

From all test sequences (sequences 11-21), our benchmark computes translational and rotational errors for all possible subsequences of length (5, 10, 50, 100, 150, ..., 400) meters. Our evaluation ranks methods by the average of these values, with errors measured in percent (for translation) and in degrees per meter (for rotation). Details for different trajectory lengths and driving speeds can be found in the plots underneath. In addition, the first 5 test trajectories and their error plots are shown below.
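A simplified, 3-DOF version of this metric can be sketched as follows. The benchmark's own evaluation uses full 6-DOF ground-truth poses; the (x, y, yaw) pose format and the plain averaging here are illustrative assumptions:

```python
import numpy as np

def pose_to_mat(x, y, theta):
    """(x, y, yaw) pose as a 3x3 homogeneous transform."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def relative_errors(gt_poses, est_poses, length):
    """Average translational (%) and rotational (deg/m) error over all
    subsequences of a given path length, in the spirit of the metric
    described above. Poses are rows of (x, y, yaw)."""
    gt = np.asarray(gt_poses, float)
    est = np.asarray(est_poses, float)
    # Cumulative driven distance along the ground-truth trajectory
    dists = np.concatenate([[0.0], np.cumsum(np.hypot(*np.diff(gt[:, :2], axis=0).T))])
    t_errs, r_errs = [], []
    for i in range(len(gt)):
        j = np.searchsorted(dists, dists[i] + length)
        if j >= len(gt):
            break
        # Relative motion over the subsequence, for ground truth and estimate
        rel_gt = np.linalg.inv(pose_to_mat(*gt[i])) @ pose_to_mat(*gt[j])
        rel_est = np.linalg.inv(pose_to_mat(*est[i])) @ pose_to_mat(*est[j])
        err = np.linalg.inv(rel_est) @ rel_gt
        t_errs.append(np.hypot(err[0, 2], err[1, 2]) / length * 100.0)
        r_errs.append(np.degrees(abs(np.arctan2(err[1, 0], err[0, 0]))) / length)
    return np.mean(t_errs), np.mean(r_errs)
```

For example, an estimate whose translation is uniformly scaled by 1.1 along a straight trajectory yields a 10% translational error at every subsequence length.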

[Plots: test set average; trajectories and error plots for sequences 11-15]

