🤖 AI Summary
Formal safety verification of black-box stochastic dynamical systems—such as autonomous vehicles and medical AI—remains intractable due to unknown dynamics and distributional uncertainty.
Method: We propose a spectral barrier function framework for uncertainty-aware safety certification. It employs finite Fourier kernel expansions to transform the semi-infinite, non-convex safety verification problem into a tractable linear program. A reproducing kernel Hilbert space (RKHS) ambiguity set models state-transition uncertainty, while conditional mean embeddings and distributionally robust optimization enable rigorous certification against out-of-distribution disturbances. The method requires only finite trajectory data and no knowledge of internal system structure.
Contribution/Results: Evaluated on multiple high-dimensional nonlinear benchmarks, our approach demonstrates computational efficiency and scalability while preserving strict formal guarantees. It provides the first computationally feasible, quantifiable, and distributionally robust safety certification scheme for high-stakes AI systems.
📝 Abstract
Ensuring the safety of AI-enabled systems, particularly in high-stakes domains such as autonomous driving and healthcare, has become increasingly critical. Traditional formal verification tools fall short when faced with systems that embed both opaque, black-box AI components and complex stochastic dynamics. To address these challenges, we introduce LUCID (Learning-enabled Uncertainty-aware Certification of stochastIc Dynamical systems), a verification engine for certifying the safety of black-box stochastic dynamical systems from a finite dataset of random state transitions. As such, LUCID is the first known tool capable of establishing quantified safety guarantees for such systems. Thanks to its modular architecture and extensive documentation, LUCID is designed for easy extensibility. LUCID employs a data-driven methodology rooted in control barrier certificates, which are learned directly from system transition data, to ensure formal safety guarantees. We use conditional mean embeddings to embed data into a reproducing kernel Hilbert space (RKHS), where we construct an RKHS ambiguity set that can be inflated to robustify the result against out-of-distribution behavior. A key innovation within LUCID is its use of a finite Fourier kernel expansion to reformulate a semi-infinite non-convex optimization problem into a tractable linear program. The resulting spectral barrier allows us to leverage the fast Fourier transform to generate the relaxed problem efficiently, offering a scalable yet distributionally robust framework for verifying safety. LUCID is thus able to handle the complexities of modern black-box systems while providing formal guarantees of safety. These unique capabilities are demonstrated on challenging benchmarks.
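The conditional mean embedding step described above can be sketched with the standard kernel-ridge estimator: from transition pairs (x_i, x'_i), the embedding weights w(x) = (K + nλI)⁻¹ k(x) let us estimate E[f(x') | x] ≈ Σ_i w_i(x) f(x'_i) for any function f, such as a candidate barrier. Everything here is a toy illustration (the linear dynamics, bandwidth, regularization λ, and quadratic barrier are all hypothetical), not LUCID's implementation or its ambiguity-set inflation.

```python
import numpy as np

def gauss_kernel(A, B, sigma=1.0):
    """Gaussian kernel matrix between row-stacked point sets A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

rng = np.random.default_rng(1)
n, dim = 200, 2
X = rng.uniform(-1, 1, size=(n, dim))            # sampled states
Xn = 0.9 * X + 0.05 * rng.normal(size=(n, dim))  # observed next states (toy dynamics)

# Empirical conditional mean embedding weights at a query state x0.
lam = 1e-3
x0 = np.array([[0.5, -0.5]])
W = np.linalg.solve(gauss_kernel(X, X) + n * lam * np.eye(n),
                    gauss_kernel(X, x0))

# Estimate the expected barrier value one step ahead: E[B(x') | x = x0].
barrier = lambda x: (x ** 2).sum(-1)             # a toy quadratic barrier
est = W[:, 0] @ barrier(Xn)
```

This one-step expectation of the barrier is exactly the quantity a barrier-certificate condition constrains; robustifying it over an RKHS ambiguity set around the embedding is what yields the distributionally robust guarantee.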