
33 records found

IM-TD3: A Reinforcement Learning Approach for Liquid Rocket Engine Start-Up Optimization

With advancements in reusable liquid rocket engine technology to meet the diverse demands of space missions, engine systems have become increasingly complex. In most cases, these engines rely on stable open-loop control and closed-loop regulation systems. However, due to the high ...
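The entry above describes a TD3 variant (IM-TD3). As a reference point only, the standard TD3 target computation that such variants build on can be sketched as follows; the actor and critics here are stand-in linear/tanh functions assumed for illustration, not the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(1)
GAMMA, NOISE_STD, NOISE_CLIP, A_MAX = 0.99, 0.2, 0.5, 1.0

def target_actor(s):
    # Stand-in deterministic target policy (assumption for illustration).
    return np.tanh(s.mean())

def q1(s, a): return s.sum() + 0.5 * a   # stand-in target critic 1
def q2(s, a): return s.sum() + 0.4 * a   # stand-in target critic 2

def td3_target(r, s_next):
    # Target policy smoothing: add clipped Gaussian noise to the target action.
    noise = np.clip(rng.normal(0.0, NOISE_STD), -NOISE_CLIP, NOISE_CLIP)
    a_next = np.clip(target_actor(s_next) + noise, -A_MAX, A_MAX)
    # Clipped double-Q: take the minimum of the two target critics.
    return r + GAMMA * min(q1(s_next, a_next), q2(s_next, a_next))

s_next = np.array([0.1, -0.2, 0.3])
print(td3_target(r=1.0, s_next=s_next))
```

The clipped double-Q minimum counters critic overestimation; variants such as the one above typically modify this target or the exploration around it.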
This letter explores the problem of delivering unwieldy objects using nonholonomic mobile bases. We propose a new approach called free pushing to address this challenge. Unlike previous stable pushing methods, which maintain a stiff robot-object contact, our approach allows the ro ...
Advancing autonomous spacecraft proximity maneuvers and docking (PMD) is crucial for enhancing the efficiency and safety of inter-satellite services. One primary challenge in PMD is the accurate a priori definition of the system model, often complicated by inherent uncertainties ...

DACOOP-A: Decentralized Adaptive Cooperative Pursuit via Attention

Integrating rule-based policies into reinforcement learning promises to improve data efficiency and generalization in cooperative pursuit problems. However, most implementations do not properly distinguish the influence of neighboring robots in observation embedding or inter-robo ...
In recent years, safe reinforcement learning (RL) with the actor-critic structure has gained significant interest for continuous control tasks. However, achieving near-optimal control policies with safety and convergence guarantees remains challenging. Moreover, few works have fo ...
This article explores deep reinforcement learning (DRL) for the flocking control of unmanned aerial vehicle (UAV) swarms. The flocking control policy is trained using a centralized-training-decentralized-execution (CTDE) paradigm, where a centralized critic network augmented with ...
This letter addresses the problem of pushing manipulation with nonholonomic mobile robots. Pushing is a fundamental skill that enables robots to move unwieldy objects that cannot be grasped. We propose a stable pushing method that maintains stiff contact between the robot and the ...
Machine learning can be effectively applied in control loops to make optimal control decisions robustly. There is increasing interest in using spiking neural networks (SNNs) as the apparatus for machine learning in control engineering because SNNs can potentially offer high energ ...
Reinforcement learning (RL) exhibits impressive performance when managing complicated control tasks for robots. However, its wide application to physical robots is limited by the absence of strong safety guarantees. To overcome this challenge, this paper explores the control Lyap ...
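The entry above combines RL with a control Lyapunov function (CLF) for safety. A minimal sketch of the general idea, filtering an RL action through a CLF decrease condition, is shown below for a scalar single integrator x' = u with V(x) = x²/2; the dynamics, decay rate, and function names are assumptions for illustration, not the paper's actual design:

```python
ALPHA = 1.0  # desired decay rate in the CLF condition Vdot <= -ALPHA * V

def clf_filter(x, u_rl):
    """Minimally modify the RL action so that x*u <= -ALPHA * x**2 / 2."""
    if x == 0.0:
        return u_rl                 # condition holds trivially at the origin
    bound = -ALPHA * x / 2.0        # boundary control where x*u = -ALPHA*x^2/2
    if x > 0:
        return min(u_rl, bound)
    return max(u_rl, bound)

# Closed loop: even with an adversarial "RL" action pushing away from 0,
# the filtered action keeps V decreasing.
x, dt = 2.0, 0.01
for _ in range(1000):
    u = clf_filter(x, u_rl=+5.0)    # RL proposes a destabilizing action
    x += dt * u
print(abs(x) < 0.1)                 # True: the state has decayed toward 0
```

Real CLF-based safety layers typically solve a small quadratic program to make the minimal modification in higher dimensions; the scalar case above admits this closed-form clamp.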
This paper investigates the deep reinforcement learning based secure control problem for cyber-physical systems (CPS) under false data injection attacks. We describe the CPS under attack as a Markov decision process (MDP), based on which the secure controller design for CPS unde ...
The problem of learning-based control for robots has been extensively studied, whereas the security issue under malicious adversaries has received little attention. Malicious adversaries can invade intelligent devices and communication networks used in robots, causing incid ...
This article proposes a fuzzy adaptive design that solves the finite-time constrained tracking problem for hypersonic flight vehicles (HFVs). Actuator dynamics and asymmetric time-varying constraints are considered when solving this problem. The main features of the proposed design lie in 1) ...
Koopman operators are infinite-dimensional and capture the characteristics of nonlinear dynamics in a lifted, globally linear manner. The finite data-driven approximation of Koopman operators results in a class of linear predictors, useful for formulating linear model predictive co ...
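The data-driven approximation mentioned above is commonly computed via extended dynamic mode decomposition (EDMD): lift state snapshots through a dictionary of observables and fit a linear predictor by least squares. The sketch below assumes a toy nonlinear system and a small polynomial dictionary chosen so the lifted state dynamics close exactly; both are illustrative choices, not any particular paper's setup:

```python
import numpy as np

def step(x):
    # Toy nonlinear dynamics (assumed for illustration).
    return np.array([0.9 * x[0], 0.8 * x[1] + 0.1 * x[0] ** 2])

def lift(x):
    # Dictionary of observables: the state plus quadratic monomials.
    return np.array([x[0], x[1], x[0] ** 2, x[0] * x[1], x[1] ** 2])

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))            # state snapshots
Y = np.array([step(x) for x in X])           # one-step successors

Psi_X = np.array([lift(x) for x in X])       # lifted snapshots
Psi_Y = np.array([lift(y) for y in Y])

# Least-squares Koopman approximation: Psi_Y ≈ Psi_X @ K
K, *_ = np.linalg.lstsq(Psi_X, Psi_Y, rcond=None)

# One-step prediction in the lifted space, projected back to the state.
x0 = np.array([0.5, -0.3])
x1_pred = (lift(x0) @ K)[:2]
print(np.allclose(x1_pred, step(x0), atol=1e-6))  # True for this system
```

Because the predictor K is linear in the lifted coordinates, it can be dropped directly into a linear MPC formulation, which is the use the abstract points to.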
This paper proposes a sparse Bayesian treatment of deep neural networks (DNNs) for system identification. Although DNNs show impressive approximation ability in various fields, several challenges still exist for system identification problems. First, DNNs are known to be too comp ...
Multi-robot formation control has been intensively studied in recent years. In practical applications, the multi-robot system's ability to independently change the formation to avoid collision among the robots or with obstacles is critical. In this study, a multi-robot adaptive f ...
Distributed model predictive control (DMPC) concerns how to effectively control multiple robotic systems with constraints online. However, the nonlinearity, nonconvexity, and strong interconnections of dynamic system models and constraints can make the real-time and real-world DM ...
This paper presents a novel model-reference reinforcement learning algorithm for the intelligent tracking control of uncertain autonomous surface vehicles with collision avoidance. The proposed control algorithm combines a conventional control method with reinforcement learning t ...
This paper presents a deep reinforcement learning (DRL) algorithm for orientation estimation using inertial sensors combined with a magnetometer. Lyapunov’s method in control theory is employed to prove the convergence of orientation estimation errors. The estimator gains and a L ...
Performing multiple experiments is common when learning internal mechanisms of complex systems. These experiments can include perturbations of parameters or external disturbances. A challenging problem is to efficiently incorporate all collected data simultaneously to infer the u ...
This article investigates the zero-sum game-based secure control problem for cyber-physical systems (CPS) under the actuator false data injection attacks. The physical process is described as a linear time-invariant discrete-time model. Both the process noise and the measurement ...