This Review discusses the main results obtained in training end-to-end neural architectures for the guidance and control of interplanetary transfers, planetary landings, and close-proximity operations, highlighting the successful learning of optimality principles by the underlying neural models. Spacecraft and drones designed to explore our solar system must operate in conditions where the smart use of onboard resources can determine the success or failure of the mission. Sensorimotor actions are thus often derived from high-level, quantifiable optimality principles assigned to each task, using consolidated tools from optimal control theory. Traditionally, the planned actions are computed on the ground and uploaded on board, where controllers have the task of tracking the uploaded guidance profile. Here, we review recent trends based on end-to-end networks, called guidance and control networks (G&CNets), which allow spacecraft to depart from this architecture and instead compute optimal actions on board. In this way, sensor information is transformed in real time into optimal plans, increasing mission autonomy and robustness. We then analyze drone racing as an ideal gym environment for testing these architectures on real robotic platforms, thereby increasing confidence in their use in future space exploration missions. Drone racing not only shares with spacecraft missions limited onboard computational capabilities and similar control structures induced by the optimality principles sought, but also entails different levels of uncertainty and unmodeled effects, as well as a very different dynamical timescale.
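To make the G&CNet idea concrete, the following is a minimal sketch, in PyTorch, of an end-to-end network mapping a spacecraft state directly to a control action, fitted by behavioral cloning on state-action pairs sampled from trajectories computed offline with an optimal-control solver. The `GCNet` class, the 7-dimensional state (position, velocity, mass), the 3-dimensional thrust command, and the training loop are illustrative assumptions, not the specific implementations reviewed here.

```python
import torch
import torch.nn as nn

class GCNet(nn.Module):
    """Minimal guidance-and-control network: maps the spacecraft state
    directly to a control action, replacing the guidance-plus-tracking
    split with a single end-to-end model. (Illustrative sketch only.)"""
    def __init__(self, state_dim: int = 7, control_dim: int = 3, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, control_dim), nn.Tanh(),  # bounded thrust components
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def train(model: GCNet, states: torch.Tensor, actions: torch.Tensor,
          epochs: int = 1000, lr: float = 1e-3) -> GCNet:
    """Behavioral cloning: regress the network onto (state, optimal action)
    pairs taken from optimal trajectories computed on the ground."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(states), actions)
        loss.backward()
        opt.step()
    return model
```

Once trained, such a network is cheap to evaluate on board: a single forward pass turns the current sensor-derived state into a control action, which is what enables the real-time computation of optimal plans described above.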