Statistically Assuring Safety of Control Systems using Ensembles of Safety Filters and Conformal Prediction

📅 2025-11-11
🤖 AI Summary
Hamilton–Jacobi (HJ) reachability analysis for learning-based control systems lacks statistically rigorous safety guarantees. Method: We propose a safety verification framework integrating conformal prediction with value-function ensembling. Specifically: (1) we develop a conformal prediction–based uncertainty quantification mechanism to safely calibrate the learned HJ value function; (2) we design an ensemble-based safety filter built from independently trained value functions to enhance policy robustness; and (3) we establish a verifiable safety-driven policy switching mechanism that unifies HJ analysis, reinforcement learning, and ensemble learning. Results: Experiments demonstrate that our approach delivers strict probabilistic safety guarantees in high-dimensional, unknown dynamical environments, significantly reducing the risk of entering unsafe sets. To the best of our knowledge, this is the first work to incorporate conformal prediction into HJ reachability–based safety verification—achieving a unified solution that is statistically sound, computationally tractable, and deployment-reliable.

📝 Abstract
Safety assurance is a fundamental requirement for deploying learning-enabled autonomous systems. Hamilton-Jacobi (HJ) reachability analysis is a fundamental method for formally verifying safety and generating safe controllers. However, computing the HJ value function that characterizes the backward reachable set (BRS) of a set of user-defined failure states is computationally expensive, especially for high-dimensional systems, motivating the use of reinforcement learning approaches to approximate the value function. Unfortunately, a learned value function and its corresponding safe policy are not guaranteed to be correct. The learned value function evaluated at a given state may not be equal to the actual safety return achieved by following the learned safe policy. To address this challenge, we introduce a conformal prediction-based (CP) framework that bounds such uncertainty. We leverage CP to provide probabilistic safety guarantees when using learned HJ value functions and policies to prevent control systems from reaching failure states. Specifically, we use CP to calibrate the switching between the unsafe nominal controller and the learned HJ-based safe policy and to derive safety guarantees under this switched policy. We also investigate using an ensemble of independently trained HJ value functions as a safety filter and compare this ensemble approach to using individual value functions alone.
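The calibration step described in the abstract can be sketched with split conformal prediction. The nonconformity score (learned value minus actual safety return), the synthetic calibration data, and the switching rule below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def conformal_quantile(scores, alpha):
    """Split-conformal quantile: with probability >= 1 - alpha, a fresh
    (exchangeable) score is <= this value."""
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(scores)[min(k, n) - 1]

# Hypothetical calibration data: learned HJ value V(x) at sampled states
# vs. the actual safety return G(x) obtained by rolling out the learned
# safe policy from those states.
rng = np.random.default_rng(0)
V_cal = rng.normal(0.5, 0.2, size=500)            # learned value estimates
G_cal = V_cal - np.abs(rng.normal(0, 0.05, 500))  # actual returns (lower)

# Nonconformity score: how much the learned value overestimates safety.
scores = V_cal - G_cal
q = conformal_quantile(scores, alpha=0.05)

def switched_action(x, V, pi_nominal, pi_safe):
    """Use the nominal controller only while the calibrated value still
    certifies safety; otherwise fall back to the learned safe policy."""
    return pi_nominal(x) if V(x) - q > 0 else pi_safe(x)
```

Subtracting the quantile `q` before testing the sign of the value function is one natural way to calibrate the switch; the paper's actual score and threshold may differ.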
Problem

Research questions and friction points this paper is trying to address.

Ensuring safety guarantees for learning-enabled autonomous control systems
Addressing computational complexity of Hamilton-Jacobi reachability in high dimensions
Providing probabilistic safety bounds for learned value functions and policies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ensembles of safety filters enhance system reliability
Conformal prediction bounds uncertainty in learned policies
Calibrated switching mechanism between the unsafe nominal controller and the learned safe policy
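The ensemble safety filter investigated in the paper can be sketched as follows. The conservative min-aggregation over independently trained value functions is an assumption for illustration; the paper compares this ensemble approach against individual value functions and may use a different combination rule:

```python
def ensemble_filter(x, value_fns, pi_nominal, pi_safe, threshold=0.0):
    """Conservative ensemble safety filter: if any member of the
    ensemble judges the state unsafe (value <= threshold), override
    the nominal action with the learned safe policy."""
    worst = min(V(x) for V in value_fns)  # most pessimistic member
    return pi_nominal(x) if worst > threshold else pi_safe(x)
```

Taking the minimum makes the filter intervene whenever a single ensemble member flags the state, trading some conservativeness for robustness to errors in any one learned value function.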
Ihab Tabbara — Washington University in St. Louis
Yuxuan Yang — Washington University in St. Louis
Hussein Sibai — Washington University in St. Louis
Control Theory · Formal Methods · Machine Learning · Robotics