Data-driven learning of feedback maps for explicit robust predictive control: an approximation theoretic view

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the data-driven robust model predictive control (MPC) problem for systems with linear dynamics subject to additive disturbances, quadratic costs, and convex constraints on the state, input, and disturbance sequences. The proposed method first generates high-fidelity (state, action) data by solving a convex semi-infinite reformulation of the underlying min-max problem exactly on a grid of the admissible state space; it then constructs feedback maps using polynomial or piecewise-affine approximations with certified uniform error bounds. Crucially, the approximation error is incorporated directly at the controller synthesis stage, ensuring closed-loop recursive feasibility by construction and input-to-state stability. Unlike conventional approximation-based MPC schemes, the approach eliminates online optimization while retaining provable robustness and performance guarantees. On two benchmark numerical examples, the learned policies satisfy all constraints and yield stable closed-loop behavior.

📝 Abstract
We establish an algorithm to learn feedback maps from data for a class of robust model predictive control (MPC) problems. The algorithm accounts for the approximation errors due to the learning directly at the synthesis stage, ensuring recursive feasibility by construction. The optimal control problem consists of a linear noisy dynamical system, a quadratic stage and quadratic terminal costs as the objective, and convex constraints on the state, control, and disturbance sequences; the control minimizes and the disturbance maximizes the objective. We proceed via two steps -- (a) Data generation: First, we reformulate the given minmax problem into a convex semi-infinite program and employ recently developed tools to solve it in an exact fashion on grid points of the state space to generate (state, action) data. (b) Learning approximate feedback maps: We employ a couple of approximation schemes that furnish tight approximations within preassigned uniform error bounds on the admissible state space to learn the unknown feedback policy. The stability of the closed-loop system under the approximate feedback policies is also guaranteed under a standard set of hypotheses. Two benchmark numerical examples are provided to illustrate the results.
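The two-step pipeline in the abstract can be sketched in a few lines. This is an illustrative stand-in, not the paper's implementation: the exact convex semi-infinite program of step (a) is replaced by a hypothetical `solve_robust_mpc` (here a saturated linear feedback on a scalar system), and step (b) fits a polynomial feedback map and measures its uniform error on the grid.

```python
import numpy as np

def solve_robust_mpc(x):
    # Stand-in for the exact solution of the convex semi-infinite
    # program at one grid point; returns the first control action.
    # (Illustrative only: a saturated linear feedback.)
    return np.clip(-0.8 * x, -1.0, 1.0)

# (a) Data generation: grid the admissible state space and record
# (state, action) pairs from the exact solver.
grid = np.linspace(-2.0, 2.0, 201)
actions = np.array([solve_robust_mpc(x) for x in grid])

# (b) Learning: fit a polynomial feedback map and compute its
# empirical uniform error on the grid (the paper certifies a bound
# over the whole admissible set, not just the grid).
coeffs = np.polyfit(grid, actions, deg=7)
eps = np.max(np.abs(np.polyval(coeffs, grid) - actions))
print(f"uniform error on grid: {eps:.4f}")
```

The certified bound `eps` is then fed back into the synthesis stage (e.g. via constraint tightening) so that the learned policy inherits recursive feasibility.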
Problem

Research questions and friction points this paper is trying to address.

Learning feedback maps from data for robust MPC
Ensuring recursive feasibility despite approximation errors
Guaranteeing closed-loop stability under approximate policies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning feedback maps from data
Ensuring recursive feasibility by construction
Approximating feedback policies with error bounds
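The last point, accounting for the approximation error at synthesis time, can be illustrated with a toy input constraint. This is a schematic sketch with assumed values (`u_max`, `eps`), not the paper's exact tightening: if the exact policy is designed against a bound tightened by the certified error `eps`, any learned policy within `eps` of it automatically respects the true bound.

```python
import numpy as np

u_max = 1.0   # true input bound (illustrative value)
eps = 0.05    # certified uniform approximation error (assumed)

# Exact policy synthesized against the tightened bound u_max - eps.
u_star = np.clip(-0.8 * 1.3, -(u_max - eps), u_max - eps)

# Worst-case learned action deviates from u_star by at most eps,
# so it still satisfies the original constraint |u| <= u_max.
u_hat = u_star + eps
print(abs(u_hat) <= u_max)
```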