Standard Gaussian Process Can Be Excellent for High-Dimensional Bayesian Optimization

📅 2024-02-05
📈 Citations: 2
Influential: 0
🤖 AI Summary
Standard Gaussian process (GP) based Bayesian optimization (BO) is widely believed to underperform in high-dimensional settings, yet this belief lacks rigorous theoretical or empirical justification. Method: We systematically investigate the true sources of degradation and identify kernel selection and length-scale initialization, not dimensionality per se, as the primary bottlenecks. We theoretically and empirically refute the misconception that the squared exponential (SE) kernel fails because of the curse of dimensionality. We propose a robust length-scale initialization strategy for the SE kernel that requires no additional prior knowledge, and we derive probabilistic bounds characterizing when length-scale gradients vanish. Results: Evaluated on eleven high-dimensional benchmarks, our approach elevates standard BO to state-of-the-art performance: the Matérn kernel reaches top-tier results out of the box, while the SE kernel, when properly initialized, attains near-optimal performance with significantly improved stability and optimization efficiency.
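To make the summary's headline claim concrete, here is a minimal sketch of standard BO with a Matérn-5/2 GP surrogate and expected improvement. This is an illustration, not the authors' implementation: the toy objective, the random candidate pool used to optimize the acquisition, and the fixed length-scale of sqrt(d) are placeholder choices; the paper fits hyperparameters by maximizing the marginal likelihood.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import norm

def matern52(X1, X2, ls, var=1.0):
    # Matern-5/2 kernel with a shared (isotropic) length-scale `ls`.
    r = cdist(X1, X2) / ls
    return var * (1.0 + np.sqrt(5) * r + 5.0 * r**2 / 3.0) * np.exp(-np.sqrt(5) * r)

def gp_posterior(Xtr, ytr, Xte, ls, noise=1e-6):
    # Standard zero-mean GP regression equations.
    K = matern52(Xtr, Xtr, ls) + noise * np.eye(len(Xtr))
    Ks = matern52(Xtr, Xte, ls)                       # (n_train, n_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - (v**2).sum(axis=0)                    # prior variance is 1 at distance 0
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sd, best):
    # EI under a minimization convention.
    z = (best - mu) / sd
    return sd * (z * norm.cdf(z) + norm.pdf(z))

rng = np.random.default_rng(0)
d = 100                                               # high-dimensional input
f = lambda X: (X**2).sum(axis=-1)                     # toy objective (minimize)
X = rng.uniform(-1.0, 1.0, (20, d))                   # initial design
y = f(X)
for it in range(30):
    cand = rng.uniform(-1.0, 1.0, (2000, d))          # random candidate pool
    mu, sd = gp_posterior(X, y, cand, ls=np.sqrt(d))  # fixed ls only for this sketch
    x_next = cand[np.argmax(expected_improvement(mu, sd, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next))
print("best value found:", y.min())
```

The loop is structurally identical whether d is 2 or 1000; per the paper, what changes outcomes is the kernel family and the length-scale treatment, not the dimensionality of the loop itself.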

📝 Abstract
A longstanding belief holds that Bayesian Optimization (BO) with standard Gaussian processes (GP) -- referred to as standard BO -- underperforms in high-dimensional optimization problems. While this belief seems plausible, it lacks both robust empirical evidence and theoretical justification. To address this gap, we present a systematic investigation. First, through a comprehensive evaluation across eleven widely used benchmarks, we found that while the popular Squared Exponential (SE) kernel often leads to poor performance, using Matérn kernels enables standard BO to consistently achieve top-tier results, frequently surpassing methods specifically designed for high-dimensional optimization. Second, our theoretical analysis reveals that the SE kernel's failure primarily stems from improper initializations of the length-scale parameters; such initializations are common in practice but can cause gradient vanishing during training. We provide a probabilistic bound to characterize this issue, showing that Matérn kernels are less susceptible and can robustly handle much higher dimensions. Third, we propose a simple robust initialization strategy that dramatically improves the performance of the SE kernel, bringing it close to state-of-the-art methods, without requiring any additional priors or regularization. We prove another probabilistic bound that demonstrates how the gradient vanishing issue can be effectively mitigated with our method. Our findings advocate for a re-evaluation of standard BO's potential in high-dimensional settings.
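The gradient-vanishing mechanism in the abstract's second point can be checked numerically. For x, x' drawn uniformly from [-1, 1]^d, the expected squared distance grows linearly in d, so with a unit length-scale the SE kernel's off-diagonal entries decay like exp(-d/3): the kernel matrix collapses toward the identity, and since the gradient of the kernel with respect to the log length-scale is K multiplied entrywise by the scaled squared distances, it collapses with it. A minimal numeric check (not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
for d in (2, 10, 100, 1000):
    X = rng.uniform(-1.0, 1.0, (n, d))
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-sq / 2.0)                                # SE kernel, length-scale 1
    off = K[~np.eye(n, dtype=bool)]
    # E[(x_i - x_j)^2] = 2/3 per dimension for x ~ U(-1, 1), so the squared
    # distance grows linearly in d and off-diagonal entries decay like exp(-d/3):
    print(f"d={d:5d}  mean off-diagonal k(x, x') = {off.mean():.3e}")
```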
Problem

Research questions and friction points this paper is trying to address.

Challenges standard BO performance in high dimensions
Identifies SE kernel issues with length-scale initialization
Proposes robust initialization to improve SE kernel performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Matérn kernels enhance high-dimensional Bayesian Optimization
Robust initialization strategy improves SE kernel performance (see the sketch after this list)
Probabilistic bounds validate kernel performance and initialization
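The bullets above reference the initialization fix without stating it. The paper's exact strategy is not reproduced here; as an assumed stand-in consistent with the distance argument above, the sketch below scales the initial length-scale as sqrt(d), which keeps scaled distances O(1). Both the constant and the scaling should be taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 1000
X = rng.uniform(-1.0, 1.0, (n, d))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
off = ~np.eye(n, dtype=bool)

for ls in (1.0, np.sqrt(d)):             # naive init vs. sqrt(d)-scaled init (assumed)
    K = np.exp(-sq / (2.0 * ls**2))      # SE kernel
    grad = K * sq / ls**2                # dK/dlog(ls): vanishes whenever K does
    print(f"ls = {ls:7.2f}  mean k = {K[off].mean():.3e}  "
          f"mean dK/dlog(ls) = {grad[off].mean():.3e}")
```

With ls = 1, both the kernel entries and the gradient underflow to zero at d = 1000, so gradient-based hyperparameter learning cannot escape the bad initialization; with ls = sqrt(d), both remain O(1).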
Authors

Zhitong Xu (Kahlert School of Computing, University of Utah)
Shandian Zhe (School of Computing, University of Utah)
Haitao Wang
Jeff M. Phillips