Lyapunov Stability Learning with Nonlinear Control via Inductive Biases

📅 2025-11-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Learning control Lyapunov functions (CLFs) for safety-critical systems faces challenges including poor training convergence, complex formal verification, and limited regions of attraction (ROAs). Method: This paper proposes explicitly encoding Lyapunov stability conditions as inductive biases into neural network architectures, enabling end-to-end joint learning of CLFs and nonlinear controllers. Crucially, Lyapunov conditions are embedded as structural constraints within the model—not as external optimization penalties—thereby simplifying both training objectives and verification logic. Contribution/Results: We theoretically identify cumulative constraint violation as the root cause of declining success rates in conventional iterative training. Experiments across multiple nonlinear dynamical systems demonstrate that our framework significantly improves CLF learning convergence speed and success rate, while expanding the certified ROA by up to 2.3×. These results validate the method’s effectiveness and robustness for stabilizing complex, safety-critical dynamics.

📝 Abstract
Finding a control Lyapunov function (CLF) for a dynamical system with a controller is an effective way to guarantee stability, a crucial requirement in safety-critical applications. Recently, deep learning models representing CLFs have been applied within a learner-verifier framework to identify satisfiable candidates. However, the learner treats Lyapunov conditions as complex optimisation constraints, which makes global convergence hard to achieve; implementing these Lyapunov conditions for verification is also highly complicated. To improve this framework, we treat Lyapunov conditions as inductive biases and design a neural CLF and a CLF-based controller guided by this knowledge. This design enables a stable optimisation process with few constraints and allows end-to-end learning of both the CLF and the controller. Our approach achieves a higher convergence rate and a larger region of attraction (ROA) than existing methods across a wide range of experimental cases. We also thoroughly analyse why the success rate of previous methods decreases during learning.
Problem

Research questions and friction points this paper is trying to address.

Improving convergence in learning control Lyapunov functions for stability
Simplifying verification complexity of Lyapunov conditions in control systems
Enhancing region of attraction through neural CLF and controller design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Treats Lyapunov conditions as inductive biases
Designs neural CLF and controller with stability guidance
Enables end-to-end learning with constrained optimization
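The inductive-bias idea listed above can be sketched concretely: instead of penalising violations of the Lyapunov positivity conditions V(0) = 0 and V(x) > 0 during training, one chooses a network architecture that satisfies them for any weight values. The class below is a minimal illustration of that principle only; the feature map, dimensions, and the ε-regularisation term are assumptions for this sketch, not the paper's actual architecture.

```python
import numpy as np

class NeuralCLF:
    """Sketch of a neural CLF whose positive-definiteness is built into
    the architecture: V(0) = 0 and V(x) > 0 for x != 0 hold for *any*
    weights, so training never has to enforce them via penalty terms.
    (Illustrative only; the paper's exact design may differ.)"""

    def __init__(self, dim, hidden=16, eps=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.standard_normal((hidden, dim))
        self.b1 = rng.standard_normal(hidden)
        self.W2 = rng.standard_normal((hidden, hidden))
        self.eps = eps  # lower-bounds V(x) by eps * ||x||^2

    def _phi(self, x):
        # Small MLP feature map; its weights are what training would adjust.
        h = np.tanh(self.W1 @ x + self.b1)
        return self.W2 @ h

    def value(self, x):
        x = np.asarray(x, dtype=float)
        # Subtracting phi(0) forces V(0) = 0 by construction.
        d = self._phi(x) - self._phi(np.zeros_like(x))
        # ||d||^2 >= 0 and eps * ||x||^2 > 0 for x != 0, so V is
        # positive definite regardless of the weight values.
        return float(d @ d + self.eps * (x @ x))
```

Because positivity holds structurally, a learner using such a model only needs to optimise the remaining condition (the decrease of V along closed-loop trajectories), which is consistent with the paper's claim that structural constraints simplify both the training objective and the verification logic.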
Yupu Lu
The University of Hong Kong
Physics-informed learning, planning and control, Human-Computer Interaction
Shijie Lin
The University of Hong Kong
Event-based Vision, SLAM, Robotics, Computational Imaging
Hao Xu
School of Computing and Data Science, the University of Hong Kong and the Centre for Garment Production Limited (TransGP), Hong Kong SAR
Zeqing Zhang
The University of Hong Kong
robotic manipulation, multi-agent system, collision detection
Jia Pan
School of Computing and Data Science, the University of Hong Kong and the Centre for Garment Production Limited (TransGP), Hong Kong SAR