Bayesian Optimization with Expected Improvement: No Regret and the Choice of Incumbent

📅 2025-08-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the open problem of establishing a cumulative regret upper bound for Gaussian process expected improvement (GP-EI) under noisy observations. We propose the first unified theoretical analysis framework for three prevalent incumbent selection strategies—best posterior mean incumbent (BPMI), best sampled posterior mean incumbent (BSPMI), and best observed incumbent (BOI). Through rigorous derivation, we prove that GP-EI variants based on posterior mean and sampled posterior mean are no-regret. Moreover, we provide the first convergence guarantee for BOI. Under the squared exponential and Matérn kernels, we derive tight cumulative regret bounds of $\tilde{O}(\sqrt{T\gamma_T})$, where $\gamma_T$ denotes the maximum information gain. Empirical evaluations validate the tightness of our theoretical bounds. Our results furnish critical theoretical foundations for parameter selection and practical deployment of EI-based Bayesian optimization.
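For context, the quantities in the bound follow the standard Bayesian-optimization definitions (not specific to this paper): with $x^* = \arg\max_x f(x)$ and $x_t$ the point queried at round $t$, the cumulative regret is

$R_T = \sum_{t=1}^{T} \left( f(x^*) - f(x_t) \right),$

and an algorithm is *no-regret* when $R_T / T \to 0$ as $T \to \infty$. A bound of the form $\tilde{O}(\sqrt{T\gamma_T})$ is sublinear in $T$ whenever the maximum information gain $\gamma_T$ grows slowly enough, e.g. polylogarithmically for the SE kernel.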

📝 Abstract
Expected improvement (EI) is one of the most widely used acquisition functions in Bayesian optimization (BO). Despite its proven empirical success in applications, the cumulative regret upper bound of EI remains an open question. In this paper, we analyze the classic noisy Gaussian process expected improvement (GP-EI) algorithm. We consider the Bayesian setting, where the objective is a sample from a GP. Three commonly used incumbents, namely the best posterior mean incumbent (BPMI), the best sampled posterior mean incumbent (BSPMI), and the best observation incumbent (BOI) are considered as the choices of the current best value in GP-EI. We present for the first time the cumulative regret upper bounds of GP-EI with BPMI and BSPMI. Importantly, we show that in both cases, GP-EI is a no-regret algorithm for both squared exponential (SE) and Matérn kernels. Further, we present for the first time that GP-EI with BOI either achieves a sublinear cumulative regret upper bound or has a fast converging noisy simple regret bound for SE and Matérn kernels. Our results provide theoretical guidance to the choice of incumbent when practitioners apply GP-EI in the noisy setting. Numerical experiments are conducted to validate our findings.
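To make the three incumbent choices concrete, here is a minimal sketch of the closed-form EI acquisition (maximization convention) evaluated under each incumbent. All values below are hypothetical stand-ins; the paper's actual algorithm, kernels, and analysis are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, incumbent):
    """Closed-form EI for maximization: E[max(f(x) - incumbent, 0)]
    under a Gaussian posterior N(mu, sigma^2) at each candidate."""
    sigma = np.maximum(sigma, 1e-12)  # guard against zero posterior variance
    z = (mu - incumbent) / sigma
    return (mu - incumbent) * norm.cdf(z) + sigma * norm.pdf(z)

# Hypothetical GP posterior at three candidate points.
mu = np.array([0.2, 0.5, 0.1])
sigma = np.array([0.3, 0.1, 0.4])

# Noisy observations collected so far (also hypothetical).
y_obs = np.array([0.15, 0.45])

# The three incumbents studied in the paper, in sketch form:
xi_bpmi = mu.max()     # BPMI: best posterior mean (here, over the candidates)
xi_bspmi = mu.max()    # BSPMI: best posterior mean at previously sampled points
xi_boi = y_obs.max()   # BOI: best noisy observation

ei = expected_improvement(mu, sigma, xi_bpmi)
next_idx = int(np.argmax(ei))  # query the candidate maximizing EI
```

EI is always nonnegative, and a larger incumbent (e.g. an optimistic BOI value under noise) shrinks EI everywhere, which is one reason the choice of incumbent matters for the regret analysis.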
Problem

Research questions and friction points this paper is trying to address.

Analyzing cumulative regret bounds for Bayesian optimization with Expected Improvement
Theoretical guidance for incumbent selection in noisy Gaussian process settings
Proving no-regret properties for GP-EI with different kernel functions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes GP-EI algorithm with three incumbent choices
Proves no-regret property for BPMI and BSPMI incumbents
Provides theoretical guidance for incumbent selection
Jingyi Wang
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, Livermore, CA
Haowei Wang
Department of Industrial Systems Engineering and Management, National University of Singapore, Singapore
Szu Hui Ng
Department of Industrial Systems Engineering and Management, National University of Singapore, Singapore
Cosmin G. Petra
Computational Mathematician, Lawrence Livermore National Laboratory
mathematical optimization, stochastic programming, high-performance computing, linear algebra