Model-Reference Reinforcement Learning for Collision-Free Tracking Control of Autonomous Surface Vehicles

Abstract

This paper presents a novel model-reference reinforcement learning algorithm for the intelligent tracking control of uncertain autonomous surface vehicles with collision avoidance. The proposed control algorithm combines a conventional control method with reinforcement learning to enhance control accuracy and intelligence. In the proposed control design, a nominal system is used to design a baseline tracking controller with a conventional control approach. The nominal system also defines the desired behaviour of the uncertain autonomous surface vehicle in an obstacle-free environment. Through reinforcement learning, the overall tracking controller compensates for model uncertainties while simultaneously achieving collision avoidance in environments with obstacles. In comparison with traditional deep reinforcement learning methods, the proposed learning-based control provides stability guarantees and better sample efficiency. We demonstrate the performance of the new algorithm on an example of autonomous surface vehicles.
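
To make the control structure described above concrete, the following is a minimal sketch of the model-reference idea: a conventional baseline tracking law designed on a nominal model, plus a learned term that compensates for uncertainty and handles obstacle avoidance. The planar double-integrator nominal model, the PD gains, and the policy interface are illustrative assumptions for this sketch, not the formulation used in the paper.

    import numpy as np

    def nominal_dynamics(x, u, dt=0.1):
        """Nominal (reference) vehicle model: a planar double-integrator stand-in
        for the true, uncertain surface-vehicle dynamics (assumption)."""
        pos, vel = x[:2], x[2:]
        vel = vel + dt * u
        pos = pos + dt * vel
        return np.concatenate([pos, vel])

    def baseline_controller(x, x_ref, kp=1.0, kd=2.0):
        """Conventional tracking law (here a simple PD law) designed on the nominal model."""
        pos_err = x_ref[:2] - x[:2]
        vel_err = x_ref[2:] - x[2:]
        return kp * pos_err + kd * vel_err

    def learned_correction(policy, x, x_ref, obstacles):
        """RL term that compensates model uncertainty and produces avoidance actions;
        `policy` is any trained function approximator (hypothetical interface)."""
        obs = np.concatenate([x, x_ref, np.asarray(obstacles).ravel()])
        return policy(obs)

    def total_control(policy, x, x_ref, obstacles):
        """Overall controller: baseline tracking term plus learned residual."""
        return baseline_controller(x, x_ref) + learned_correction(policy, x, x_ref, obstacles)

    if __name__ == "__main__":
        # Usage with an untrained placeholder policy (zero correction), so the
        # closed loop reduces to the baseline controller on the nominal model.
        zero_policy = lambda obs: np.zeros(2)
        x = np.zeros(4)                          # state [px, py, vx, vy]
        x_ref = np.array([1.0, 1.0, 0.0, 0.0])   # reference state from the nominal system
        obstacles = np.array([[2.0, 2.0]])       # one obstacle position
        for _ in range(50):
            u = total_control(zero_policy, x, x_ref, obstacles)
            x = nominal_dynamics(x, u)
        print(x)

In this sketch, the reinforcement learning component only adds a residual to the conventional controller rather than replacing it, which reflects the paper's claim that the combination retains stability guarantees while improving sample efficiency.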
