🤖 AI Summary
Deep learning models of complex systems often lack physical consistency, and the control problems built on them lose convexity, hindering reliable deployment in safety-critical control applications.
Method: This paper proposes a sign-constrained deep modeling and control framework that unifies monotonicity, positivity, and qualitative sign knowledge into structural sign restrictions on the Jacobian matrix, and designs neural network architectures that satisfy these restrictions while remaining exactly linearizable. The resulting model predictive controller reduces to a convex quadratic program, guaranteeing a unique global optimum and a Lipschitz-continuous control law.
Results: Experiments on a two-tank system and a hybrid powertrain demonstrate that the framework significantly improves prediction accuracy over state-of-the-art methods, produces smoother and physically consistent control inputs, and rigorously ensures closed-loop stability.
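To make the idea of a sign constraint concrete, here is a minimal sketch (our illustration, not the paper's architecture): a one-hidden-layer network whose weights are reparameterized through softplus so they are always nonnegative. Since the activation is increasing, every entry of the Jacobian is nonnegative, which enforces monotonicity of the output in every input by construction.

```python
import numpy as np

def softplus(z):
    """Map unconstrained parameters to strictly positive values."""
    return np.log1p(np.exp(z))

rng = np.random.default_rng(0)
# Raw (unconstrained) parameters; softplus makes the effective weights
# nonnegative, so every Jacobian entry of the network is nonnegative.
W1_raw = rng.normal(size=(8, 2))
W2_raw = rng.normal(size=(1, 8))
b1 = rng.normal(size=8)

def monotone_net(x):
    W1 = softplus(W1_raw)      # nonnegative hidden-layer weights
    W2 = softplus(W2_raw)      # nonnegative output-layer weights
    h = np.tanh(W1 @ x + b1)   # tanh is increasing, preserving monotonicity
    return float(W2 @ h)

# Increasing any input coordinate never decreases the output.
x = np.array([0.3, -0.5])
assert monotone_net(x + np.array([0.1, 0.0])) >= monotone_net(x)
assert monotone_net(x + np.array([0.0, 0.1])) >= monotone_net(x)
```

Other sign patterns (e.g. an output that must decrease in one input) follow by fixing the corresponding weight signs negative instead; the paper's contribution is doing this systematically while keeping the model exactly linearizable.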
📝 Abstract
Deep learning is increasingly used for complex, large-scale systems where first-principles modeling is difficult. However, standard deep learning models often fail to enforce physical structure or preserve convexity in downstream control, leading to physically inconsistent predictions and, owing to nonconvexity, discontinuous control inputs. We introduce sign constraints (sign restrictions on Jacobian entries) that unify monotonicity, positivity, and sign-definiteness; we develop model-construction methods that enforce them, together with a control-synthesis procedure. In particular, we design exactly linearizable deep models satisfying these constraints and formulate model predictive control as a convex quadratic program, which yields a unique optimizer and a Lipschitz continuous control law. On a two-tank system and a hybrid powertrain, the proposed approach improves prediction accuracy and produces smoother control inputs than existing methods.
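The control-side claim can be illustrated with a minimal sketch (hypothetical matrices, one-step horizon, no input constraints; the paper treats the general case). Once the model is exactly linearized to x⁺ = A x + B u, a quadratic cost gives an MPC problem whose Hessian R + BᵀQB is positive definite, so the optimizer is unique and linear in the state, hence Lipschitz continuous.

```python
import numpy as np

# Hypothetical linear(ized) dynamics x+ = A x + B u and quadratic weights.
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)            # state cost weight (positive semidefinite)
R = np.array([[0.5]])    # input cost weight (positive definite)

def mpc_one_step(x):
    """Minimize u^T R u + (A x + B u)^T Q (A x + B u) over u.

    The Hessian H = R + B^T Q B is positive definite, so this convex QP
    has the unique global optimum u* = -H^{-1} B^T Q A x, linear in x.
    """
    H = R + B.T @ Q @ B
    g = B.T @ Q @ A @ x
    return -np.linalg.solve(H, g)

x0 = np.array([1.0, -0.5])
u_star = mpc_one_step(x0)
# Linearity of the control law in x implies Lipschitz continuity.
assert np.allclose(mpc_one_step(2.0 * x0), 2.0 * u_star)
```

With input bounds added, the problem remains a convex QP and the solution remains unique and Lipschitz in the state, which is what rules out the discontinuous inputs that nonconvex formulations can produce.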