🤖 AI Summary
To address the challenge of jointly ensuring safety and autonomy in real-world robotic reinforcement learning, this paper proposes a safety filter based on the optimal value function of a reach-avoid problem, one that simultaneously supports primary task execution (e.g., environment exploration) and autonomous safe recovery (e.g., returning to a charging station). The method constructs a minimal-intervention safety filter directly from the value function derived via Hamilton–Jacobi reachability analysis, overcoming key limitations of conventional control barrier functions in handling nonlinear dynamics, control constraints, and model uncertainty. Integrated with a modified Soft Actor–Critic (SAC) algorithm and a recovery policy, the approach achieves 100% autonomous recovery success on the cart-pole swing-up task while incurring less than 3% performance degradation on the primary task, demonstrating, for the first time, provable safety and high sample efficiency without human intervention.
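As a point of reference, a common discrete-time formulation of the reach-avoid value function from the Hamilton–Jacobi reachability literature is sketched below; the symbols and sign conventions (target margin l, safety margin g, dynamics f) are illustrative assumptions, not the paper's exact notation.

```latex
\[
V(x) \;=\; \min\Big\{\, g(x),\; \max\big\{\, l(x),\; \max_{u \in \mathcal{U}} V\big(f(x,u)\big) \big\}\Big\}
\]
% Assumed conventions: l(x) >= 0 iff x lies in the target set,
% g(x) >= 0 iff x lies outside the failure set, and f is the
% discrete-time dynamics. Under these conventions, V(x) >= 0
% certifies that some admissible policy can reach the target
% without ever entering the failure set.
```

Under these conventions, the zero superlevel set of V is exactly the set of states from which safe recovery remains possible, which is the quantity a reach-avoid safety filter monitors.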
📝 Abstract
Designing controllers that accomplish tasks while guaranteeing safety constraints remains a significant challenge. We often want an agent to perform well on a nominal task, such as environment exploration, while ensuring it can avoid unsafe states and return to a desired target by a specific time. In particular, we are motivated by the setting of safe, efficient, hands-off training for reinforcement learning in the real world. By enabling a robot to safely and autonomously reset to a desired region (e.g., a charging station) without human intervention, we can enhance efficiency and facilitate training. Safety filters, such as those based on control barrier functions, decouple safety from nominal control objectives and rigorously guarantee safety. Despite their success, constructing these functions for general nonlinear systems with control constraints and system uncertainties remains an open problem. This paper introduces a safety filter obtained from the value function associated with the reach-avoid problem. The proposed safety filter minimally modifies the nominal controller while avoiding unsafe regions and guiding the system back to the desired target set. By preserving policy performance while allowing safe resetting, we enable efficient hands-off reinforcement learning and advance the feasibility of safe training for real-world robots. We demonstrate our approach using a modified version of soft actor-critic to safely train a swing-up task on a modified cartpole stabilization problem.
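To make the minimal-intervention idea concrete, here is a small Python sketch of a least-restrictive (switched) filter built on such a value function. The names `value_fn`, `dynamics`, `recovery_policy`, and the margin `eps` are hypothetical placeholders for illustration, not the paper's implementation.

```python
def safety_filter(x, u_nominal, value_fn, dynamics, recovery_policy, eps=0.0):
    """Least-restrictive filter: keep the nominal action unless it would
    leave the reach-avoid set (V >= 0 is assumed to certify that safe
    return to the target is still possible)."""
    x_next = dynamics(x, u_nominal)      # one-step model prediction
    if value_fn(x_next) >= eps:          # nominal action stays recoverable
        return u_nominal                 # minimal intervention: pass through
    return recovery_policy(x)            # otherwise steer back toward the target


# Toy 1-D illustration (all numbers invented): x' = x + u, failure for |x| > 2.
value = lambda x: 2.0 - abs(x)           # crude stand-in for a learned V
step = lambda x, u: x + u
recover = lambda x: -0.5 * x             # drive the state back toward the origin
print(safety_filter(1.5, 0.8, value, step, recover))   # -> -0.75 (override)
print(safety_filter(0.0, 0.5, value, step, recover))   # -> 0.5 (pass-through)
```

A switched filter like this intervenes only near the boundary of the reach-avoid set, which is why the nominal policy's performance is largely preserved; the paper obtains that set from the reach-avoid value function rather than from a hand-designed barrier function.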