🤖 AI Summary
This work addresses the linear quadratic regulator (LQR) control problem with unknown parameters, aiming to overcome a fundamental limitation of existing stochastic regret bounds by establishing an almost-sure (a.s.) optimal-order regret upper bound. We propose an adaptive LQR controller equipped with a circuit-breaking mechanism: this mechanism ensures both parameter-estimation convergence and closed-loop safety, triggers only finitely many times, and preserves asymptotic optimality. The design integrates adaptive parameter estimation, robust control synthesis, and probabilistic convergence analysis. We rigorously prove that the cumulative regret grows almost surely as $\tilde{\mathcal{O}}(\sqrt{T})$, achieving, for the first time, the optimal rate in the almost-sure sense. Extensive simulations on the Tennessee Eastman process demonstrate the method's closed-loop stability, safety guarantees, and efficient learning performance.
📝 Abstract
The Linear-Quadratic Regulation (LQR) problem with unknown system parameters has been widely studied, but it has remained unclear whether $\tilde{\mathcal{O}}(\sqrt{T})$ regret, the best known dependence on time, can be achieved almost surely. In this paper, we propose an adaptive LQR controller with an almost-sure $\tilde{\mathcal{O}}(\sqrt{T})$ regret upper bound. The controller features a circuit-breaking mechanism, which circumvents potential safety breaches and guarantees the convergence of the system parameter estimate, but is shown to be triggered only finitely often and hence has a negligible effect on the asymptotic performance of the controller. The proposed controller is also validated via simulation on the Tennessee Eastman Process (TEP), a commonly used industrial process example.
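To make the circuit-breaking idea concrete, here is a minimal scalar sketch of an adaptive certainty-equivalence LQR loop with a safety breaker. Everything in it is an illustrative assumption rather than the paper's construction: the scalar system $x_{t+1} = a x_t + b u_t + w_t$, the state threshold `x_max`, the fallback gain `k_safe` (assumed to stabilize the true system), and the exploration-noise level are all hypothetical choices, and the function names are made up for this sketch.

```python
import numpy as np

def lqr_gain(a, b, q=1.0, r=1.0, iters=500):
    # Scalar discrete-time Riccati fixed-point iteration; returns k for u = -k*x.
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

def run_adaptive_lqr(a_true=0.9, b_true=0.5, T=2000, x_max=10.0, seed=0):
    # Adaptive LQR with a circuit breaker that falls back to a safe gain
    # whenever the state leaves the (assumed) safe region |x| <= x_max.
    rng = np.random.default_rng(seed)
    x, trips = 0.0, 0
    G, h = np.eye(2), np.zeros(2)       # regularized least squares for (a, b)
    k_safe = lqr_gain(1.0, 0.1)          # conservative fallback gain (assumption:
                                         # it stabilizes the true closed loop)
    a_hat, b_hat = 0.0, 0.1              # crude prior estimate before any data
    for _ in range(T):
        if abs(x) > x_max:               # circuit breaker trips: use the safe gain
            trips += 1
            k = k_safe
        else:                            # otherwise: certainty-equivalence gain
            k = lqr_gain(a_hat, max(abs(b_hat), 1e-2))
        u = -k * x + 0.5 * rng.standard_normal()   # dither for identifiability
        x_next = a_true * x + b_true * u + rng.standard_normal()
        z = np.array([x, u])
        G += np.outer(z, z)              # accumulate least-squares statistics
        h += z * x_next
        a_hat, b_hat = np.linalg.solve(G, h)
        x = x_next
    return a_hat, b_hat, trips
```

In a typical run the least-squares estimates approach the true $(a, b)$ and the breaker fires only early on, mirroring (in toy form) the paper's claim that the mechanism triggers finitely often and so does not affect asymptotic performance.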