🤖 AI Summary
Ensuring provably safe reach-avoid control for black-box dynamical robots navigating narrow obstacle gaps remains challenging due to the absence of analytical dynamics models.
Method: The paper proposes NeuralPARC, a neural-network-based reachability analysis framework for safety verification. It extends reachable set computation to black-box systems modeled by rectified linear unit (ReLU) neural networks; the novelty lies in integrating piecewise-affine approximation, explicit modeling-error bounds, and robust reach-avoid verification, thereby removing PARC's reliance on an explicit analytical dynamics model.
Contribution/Results: The framework provides formal closed-loop safety guarantees and enables high-confidence controller synthesis at low interaction cost. Experiments span both simulation and physical platforms: extreme drift-parking maneuvers on a model car, and an autonomous surface vehicle controlled by a deep reinforcement learning policy under large disturbances. In all cases, NeuralPARC synthesizes trajectories whose safety is formally verified.
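The core idea of bounding where a ReLU-network-modeled system can end up, while accounting for modeling error, can be conveyed with a much simpler stand-in. The sketch below uses interval bound propagation (not the paper's exact algorithm, which exploits the network's piecewise-affine structure) to compute a sound outer box on a ReLU network's output over a box of inputs, then inflates it by an assumed modeling-error bound `eps`. All names and the tiny network are illustrative, not from the paper.

```python
# Illustrative only: interval over-approximation of a ReLU network's
# reachable set, inflated by a modeling-error bound `eps`. NeuralPARC
# instead enumerates the network's affine pieces exactly; this simpler
# bound shows the same "sound outer set" idea.

def affine_interval(W, b, lo, hi):
    """Propagate the box [lo, hi] through x -> W x + b (exact for boxes)."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        # Pick the interval endpoint that minimizes/maximizes each term.
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps boxes to boxes coordinate-wise."""
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

def reachable_box(layers, lo, hi, eps=0.0):
    """Outer-bound the network output over the input box, then inflate
    every coordinate by the modeling-error bound eps."""
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_interval(W, b, lo, hi)
        if i < len(layers) - 1:  # ReLU on hidden layers only
            lo, hi = relu_interval(lo, hi)
    return [v - eps for v in lo], [v + eps for v in hi]
```

A safety check then reduces to set containment: if the inflated output box avoids every obstacle and lands inside the goal, the (error-bounded) model is certified for that input set, which is the flavor of guarantee the paper establishes with tighter piecewise-affine sets.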
📝 Abstract
In the classical reach-avoid problem, autonomous mobile robots are tasked to reach a goal while avoiding obstacles. However, it is difficult to provide guarantees on the robot's performance when the obstacles form a narrow gap and the robot is a black-box (i.e. the dynamics are not known analytically, but interacting with the system is cheap). To address this challenge, this paper presents NeuralPARC. The method extends the authors' prior Piecewise Affine Reach-avoid Computation (PARC) method to systems modeled by rectified linear unit (ReLU) neural networks, which are trained to represent parameterized trajectory data demonstrated by the robot. NeuralPARC computes the reachable set of the network while accounting for modeling error, and returns a set of states and parameters with which the black-box system is guaranteed to reach the goal and avoid obstacles. NeuralPARC is shown to outperform PARC, generating provably-safe extreme vehicle drift parking maneuvers in simulations and in real life on a model car, as well as enabling safety on an autonomous surface vehicle (ASV) subjected to large disturbances and controlled by a deep reinforcement learning (RL) policy.