🤖 AI Summary
This paper investigates the asymptotic convergence of instantaneous regret in time-varying Bayesian optimization (TVBO): specifically, whether "no-regret" behavior (i.e., vanishing instantaneous regret) is achievable in dynamic environments. To this end, the authors establish an algorithm-independent asymptotic regret framework for TVBO that applies to all major classes of stationary kernels. Leveraging tools from reproducing kernel Hilbert space theory and the theory of stochastic processes, they derive lower and upper bounds on the instantaneous regret. From these bounds they obtain sufficient conditions for the no-regret property, expressed as a match between the objective function's rate of temporal variation and the smoothness induced by the kernel. This framework advances the foundational understanding of TVBO by enabling rigorous analysis of regret behavior in non-stationary settings.
📝 Abstract
Time-Varying Bayesian Optimization (TVBO) is the standard framework for optimizing a time-varying black-box objective function that may be noisy and expensive to evaluate. Can the instantaneous regret of a TVBO algorithm vanish asymptotically, and if so, when? We answer this theoretically important question by providing algorithm-independent lower and upper bounds on the instantaneous regret of TVBO algorithms, from which we derive sufficient conditions for a TVBO algorithm to enjoy the no-regret property. Our analysis covers all major classes of stationary kernel functions.
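To make the central quantity concrete, the following minimal sketch illustrates instantaneous regret in a time-varying setting: at each step $t$, the regret is $r_t = \max_x f_t(x) - f_t(x_t)$, the gap between the current optimum and the value at the queried point. The drifting objective `f` and the two query rules below are illustrative assumptions for this sketch, not the paper's algorithm or bounds.

```python
import math

def f(t, x):
    # Illustrative time-varying objective: a Gaussian bump whose
    # peak drifts to the right at rate 0.1 per step (an assumption,
    # not the paper's function class).
    return math.exp(-(x - 0.1 * t) ** 2)

def instantaneous_regret(t, x_chosen, grid):
    # r_t = max_x f_t(x) - f_t(x_t), with the max taken over a
    # discretized domain for simplicity.
    best = max(f(t, x) for x in grid)
    return best - f(t, x_chosen)

# Discretized domain [-2, 3] with step 0.01.
grid = [i / 100 for i in range(-200, 301)]

# A stationary strategy that keeps querying x = 0 cannot track the
# moving optimum, so its instantaneous regret grows rather than vanishes.
static_regrets = [instantaneous_regret(t, 0.0, grid) for t in range(10)]

# An oracle that follows the drift (querying the grid point t/10)
# keeps the instantaneous regret at (numerically) zero.
oracle_regrets = [instantaneous_regret(t, t / 10, grid) for t in range(10)]
```

The contrast between the two query rules is the intuition behind the paper's condition: no-regret is possible only when the learner's information about the objective keeps pace with how fast the objective changes over time.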