Stochastic 2D LiDAR-Camera Sensor Fusion for Real-Time Object Mapping and Tracking


Abstract

Spatial object detection and environmental understanding are fundamental aspects of the autonomous driving of mobile robots. In this work, a stochastic approach to 2D LiDAR and stereo camera sensor fusion for object detection is presented. Uncertainty in the detection algorithm is addressed by tracking LiDAR clusters 360° around the robot and continuously updating the belief over their corresponding object types with Bayesian inference whenever they fall within the camera's view. Localisation and sensor uncertainties are accounted for by modelling the locations of entities as Gaussian distributions. By estimating the state of these entities, dynamic objects can be tracked throughout the environment while their type, observed when they were last visible to the camera, is remembered. The results demonstrate correct linking of camera detections with LiDAR clusters, with an average positional error of 0.12 m in simulation and 0.25 m across different experiments on the robot. They also show that humans can be tracked even after they move out of the camera's field of view, with an average tracking error of 0.33 m in simulation.
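
To make the Bayesian type-update concrete, the following is a minimal sketch of how the class belief of a single tracked LiDAR cluster could be refreshed whenever it is inside the camera's field of view, and then simply carried over once the cluster leaves the view. The class list, the detector likelihood values, and the function name `update_class_belief` are illustrative assumptions for this sketch, not details taken from the thesis.

```python
import numpy as np

# Illustrative class set; the thesis may use a different set of object types.
CLASSES = ["person", "chair", "unknown"]

def update_class_belief(prior, likelihood_given_class):
    """Discrete Bayes update: posterior ∝ likelihood * prior over the classes.

    prior                  -- current belief P(class), shape (n_classes,)
    likelihood_given_class -- assumed P(camera detection | class), shape (n_classes,)
    """
    posterior = likelihood_given_class * prior
    total = posterior.sum()
    if total == 0.0:          # degenerate detection: keep the previous belief
        return prior
    return posterior / total

# A cluster starts with a uniform belief; two consecutive camera detections
# that favour "person" sharpen the belief. The resulting belief is retained
# unchanged while the cluster is outside the camera's view.
belief = np.full(len(CLASSES), 1.0 / len(CLASSES))
detection_likelihood = np.array([0.7, 0.2, 0.1])   # assumed detector model

for _ in range(2):            # cluster observed twice by the camera
    belief = update_class_belief(belief, detection_likelihood)

print(dict(zip(CLASSES, belief.round(3))))
```

In the same spirit, the Gaussian position estimates mentioned in the abstract would be propagated and corrected by a recursive state estimator (e.g. a Kalman-style filter) so that dynamic objects remain tracked between camera observations; the exact filter used is described in the thesis itself.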

Files

Thesis_Thijs_Domburg.pdf (PDF, 8.1 MB, license unknown)