Safe & Intelligent Control
Fault-tolerant Flight Control with Distributional and Hybrid Reinforcement Learning using DSAC and IDHP
Abstract
A critical challenge in employing autonomous control systems in aircraft is ensuring robustness and safety. This study introduces an intelligent, fault-tolerant controller that merges two Reinforcement Learning (RL) algorithms in a hybrid approach: the Distributional Soft Actor-Critic (DSAC) and Incremental Dual Heuristic Programming (IDHP). The integration combines the strength of DSAC in learning a robust control strategy with that of IDHP in enabling real-time control adaptation. Compared with earlier controllers, such as a hybrid based on the Soft Actor-Critic (SAC) algorithm and strictly offline DSAC and SAC controllers, our hybrid demonstrates enhanced robustness to changing flight conditions as well as to sensor noise and bias. In fault-tolerance tests, it maintains superior control even when the effectiveness of the aircraft’s ailerons and elevators is compromised. By demonstrating the potential of RL-based controllers to provide robustness and fault tolerance, this research advances the feasibility of safe and autonomous flight control operations.
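To make the hybrid structure concrete, the sketch below shows one minimal way an offline-trained DSAC policy could be combined with an IDHP-style incremental agent that adapts the command online from tracking errors. This is an illustrative assumption, not the authors' implementation: the names HybridController, DummyIDHP, dsac_policy, and idhp_agent are hypothetical placeholders, and the IDHP update shown is a trivial stand-in for the actual incremental dual heuristic programming machinery.

```python
import numpy as np


class HybridController:
    """Illustrative sketch: an offline-trained DSAC actor supplies a robust
    baseline action, and an online IDHP-style agent adds an adaptive
    correction driven by the observed tracking error."""

    def __init__(self, dsac_policy, idhp_agent):
        self.dsac_policy = dsac_policy  # assumed: maps observation -> action
        self.idhp_agent = idhp_agent    # assumed: produces an online correction

    def act(self, observation, tracking_error):
        base_action = self.dsac_policy(observation)              # robust offline policy
        correction = self.idhp_agent.update(observation,
                                            tracking_error)      # real-time adaptation
        return np.clip(base_action + correction, -1.0, 1.0)      # saturated surface command


class DummyIDHP:
    """Hypothetical stand-in so the sketch runs end to end; the real IDHP
    agent would update actor/critic increments from an online model."""

    def __init__(self, lr=0.05, act_dim=2):
        self.gain = np.zeros(act_dim)
        self.lr = lr

    def update(self, observation, tracking_error):
        # Placeholder incremental update driven by the tracking error.
        self.gain += self.lr * tracking_error
        return self.gain


if __name__ == "__main__":
    dsac_policy = lambda obs: np.tanh(obs[:2])  # stand-in for a trained DSAC actor
    controller = HybridController(dsac_policy, DummyIDHP())
    obs = np.array([0.1, -0.2, 0.05])           # e.g. attitude states
    err = np.array([0.02, -0.01])               # e.g. pitch/roll tracking errors
    print(controller.act(obs, err))
```

Under these assumptions, the division of labour mirrors the abstract: the DSAC component carries the robustness learned offline, while the IDHP component supplies the fault-tolerant, real-time adaptation when surface effectiveness degrades.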