Relaxed Gaussian process interpolation: a goal-oriented approach to Bayesian optimization

📅 2022-06-07
📈 Citations: 1
✹ Influential: 0
📄 PDF
🤖 AI Summary
Standard Gaussian processes (GPs) in Bayesian optimization can produce inaccurate predictions in target regions (e.g., low-function-value areas) when their stationarity assumption fails on non-stationary objective functions. Method: The paper proposes relaxed Gaussian process (reGP) interpolation, a GP modeling procedure that weakens the interpolation constraints outside a range of interest: the predictive mean need not interpolate observations lying outside that range, but is merely constrained to remain outside it. Contribution/Results: This integrates a goal-oriented principle directly into GP interpolation. When reGP is combined with the Expected Improvement (EI) acquisition function, the resulting optimization algorithm is proved to converge, provided the objective lies in the reproducing kernel Hilbert space (RKHS) attached to the known covariance of the underlying GP. Empirical evaluation across diverse benchmark tasks shows that reGP yields better predictive distributions in the range of interest on non-stationary functions and outperforms standard stationary GP models in Bayesian optimization.
📝 Abstract
This work presents a new procedure for obtaining predictive distributions in the context of Gaussian process (GP) modeling, with a relaxation of the interpolation constraints outside ranges of interest: the mean of the predictive distribution no longer necessarily interpolates the observed values when they are outside ranges of interest, but is simply constrained to remain outside. This method, called relaxed Gaussian process (reGP) interpolation, provides better predictive distributions in ranges of interest, especially in cases where a stationarity assumption for the GP model is not appropriate. It can be viewed as a goal-oriented method and becomes particularly interesting in Bayesian optimization, for example, for the minimization of an objective function, where good predictive distributions for low function values are important. When the expected improvement criterion and reGP are used for sequentially choosing evaluation points, the convergence of the resulting optimization algorithm is theoretically guaranteed (provided that the function to be optimized lies in the reproducing kernel Hilbert space attached to the known covariance of the underlying Gaussian process). Experiments indicate that using reGP instead of stationary GP models in Bayesian optimization is beneficial.
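The abstract couples reGP with the expected improvement (EI) criterion for choosing evaluation points. As background, the standard closed-form EI for minimization, given a GP posterior mean and standard deviation, can be sketched as follows (a minimal illustration using NumPy/SciPy; the function name and example values are ours, not the paper's, and this uses an ordinary GP posterior rather than the reGP construction):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    """Closed-form EI for minimization:
    EI(x) = (f_min - mu) * Phi(z) + sigma * phi(z), z = (f_min - mu) / sigma,
    where mu and sigma are the GP posterior mean and standard deviation."""
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive std
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Illustrative posterior values at three candidate points
mu = np.array([0.0, 0.5, -0.2])
sigma = np.array([0.3, 0.1, 0.2])
ei = expected_improvement(mu, sigma, f_min=0.1)
```

A point whose posterior mean lies below the current best observation `f_min` gets a large EI, while a point with high mean and low uncertainty gets a near-zero EI; Bayesian optimization evaluates the objective where EI is maximal.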
Problem

Research questions and friction points this paper is trying to address.

Improves predictive distributions in Gaussian process modeling
Relaxes interpolation constraints outside ranges of interest
Enhances Bayesian optimization for objective function minimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Relaxed Gaussian process interpolation for better predictions
Goal-oriented method enhances Bayesian optimization performance
Theoretical convergence guaranteed with expected improvement criterion
S. Petit
Laboratoire National de Métrologie et d'Essais, 78197, Trappes Cedex, France
Julien Bect
Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire des signaux et systèmes
Statistics, Stochastic processes, Global optimization, Monte Carlo methods, Design and Analysis of Computer Experiments
E. Vázquez
Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire des signaux et systèmes, 91190, Gif-sur-Yvette, France