🤖 AI Summary
This paper addresses the optimal control of nonlinear systems whose dynamics are unknown and whose outputs are observed only through sparse, noisy measurements. The authors propose a Bayesian, continuous-time dynamical learning framework: a joint prior is placed over the continuous-time state-space dynamics and the latent state trajectory, and posterior inference is performed with a goal-directed marginal Metropolis–Hastings sampler, jointly characterizing both model and measurement uncertainty. Technically, the method integrates Bayesian system identification, Markov chain Monte Carlo (MCMC) sampling, numerical integration of ordinary differential equations (ODEs), and scenario-based optimal control. Evaluated on a Type 1 diabetes glucose-regulation simulation, the approach achieves robust, safe, and high-performance closed-loop control using only 1–2 noisy observations per hour, substantially improving control reliability and generalization under severe data scarcity.
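As a rough illustration of the inference step, the sketch below runs a marginal Metropolis–Hastings sampler over one unknown parameter of a toy continuous-time model, with a forward-Euler ODE integrator inside the likelihood. The toy system `dx/dt = -theta * x`, the observation schedule, noise level, prior, and proposal scale are all illustrative assumptions, not the paper's actual model; because these toy dynamics are deterministic given `theta`, marginalizing the latent trajectory reduces to evaluating the output likelihood along the integrated trajectory.

```python
import math
import random

random.seed(0)

# Toy continuous-time system (illustrative stand-in for the paper's model):
#   dx/dt = -theta * x, observed sparsely with Gaussian output noise.
TRUE_THETA, X0, SIGMA = 0.5, 2.0, 0.05

def euler_x(theta, t, x0=X0, dt=0.01):
    """Forward-Euler integration of dx/dt = -theta*x from time 0 to t."""
    x = x0
    for _ in range(int(round(t / dt))):
        x += dt * (-theta * x)
    return x

# Sparse, noisy output measurements (a couple per unit time), generated
# here from the analytic solution plus noise.
obs = [(0.5 * k, X0 * math.exp(-TRUE_THETA * 0.5 * k) + random.gauss(0.0, SIGMA))
       for k in range(1, 11)]

def log_lik(theta):
    """Gaussian output log-likelihood along the integrated trajectory."""
    ll = 0.0
    for t, y in obs:
        mu = euler_x(theta, t)
        ll -= 0.5 * ((y - mu) / SIGMA) ** 2 + math.log(SIGMA * math.sqrt(2 * math.pi))
    return ll

# Metropolis-Hastings over theta with a flat prior on (0, 2).
theta = 1.0
ll = log_lik(theta)
samples = []
for _ in range(2000):
    prop = theta + random.gauss(0.0, 0.1)      # random-walk proposal
    if 0.0 < prop < 2.0:                       # stay inside the prior support
        ll_prop = log_lik(prop)
        if math.log(random.random()) < ll_prop - ll:
            theta, ll = prop, ll_prop
    samples.append(theta)

posterior = samples[500:]                      # discard burn-in
post_mean = sum(posterior) / len(posterior)
print(f"posterior mean of theta: {post_mean:.3f}")
```

The retained `posterior` list is what a downstream scenario-based controller would consume: each sample is one plausible model of the dynamics, consistent with the sparse data.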
📝 Abstract
Reliable optimal control is challenging when the dynamics of a nonlinear system are unknown and only infrequent, noisy output measurements are available. This work addresses this limited-sensing setting by formulating a Bayesian prior over the continuous-time dynamics and latent state trajectory in state-space form and updating it through a targeted marginal Metropolis–Hastings sampler equipped with a numerical ODE integrator. The resulting posterior samples are used to formulate a scenario-based optimal control problem that accounts for both model and measurement uncertainty and is solved using standard nonlinear programming methods. The approach is validated in a numerical case study on glucose regulation using a Type 1 diabetes model.
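The scenario-based control step can be caricatured in a few lines: given posterior samples of the unknown parameter, choose the input that minimizes the average cost across the sampled models. The decay-plus-input dynamics, the constant-input parameterization, the hand-picked stand-in "posterior" samples, and the grid search (in place of a nonlinear programming solver) are all simplifying assumptions for illustration.

```python
# Scenario-based optimal control sketch: choose a constant input u that
# drives x toward a target under several sampled models of the dynamics.
#   dx/dt = -theta * x + u   (illustrative toy dynamics, not the paper's model)

def rollout(theta, u, x0=2.0, T=5.0, dt=0.01):
    """Forward-Euler rollout of dx/dt = -theta*x + u; returns x(T)."""
    x = x0
    for _ in range(int(round(T / dt))):
        x += dt * (-theta * x + u)
    return x

# Stand-ins for posterior samples of theta (these would come from MCMC).
scenarios = [0.3, 0.5, 0.7]
TARGET, CONTROL_PENALTY = 1.0, 0.01

def scenario_cost(u):
    """Terminal tracking error averaged over scenarios, plus an input penalty."""
    err = sum((rollout(th, u) - TARGET) ** 2 for th in scenarios) / len(scenarios)
    return err + CONTROL_PENALTY * u ** 2

# Grid search stands in for the nonlinear programming solver.
u_grid = [0.01 * i for i in range(201)]        # u in [0, 2]
best_u = min(u_grid, key=scenario_cost)
print(f"scenario-optimal constant input: u = {best_u:.2f}")
```

In the paper's setting the decision variable would be a full input trajectory and the cost would encode glucose-regulation objectives and safety constraints, but the structure is the same: one cost averaged over posterior scenarios, so the chosen input hedges against model uncertainty rather than trusting a single point estimate.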