High Dimensional Bayesian Optimization using Lasso Variable Selection

📅 2025-04-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
High-dimensional Bayesian optimization suffers from poor sample efficiency and high computational cost due to the curse of dimensionality. This paper proposes LengthScale-Lasso: a method that quantifies variable importance via Gaussian process kernel length-scale estimation, integrates Lasso-style structured sparsity to dynamically identify critical variables, and restricts acquisition function optimization to the corresponding low-dimensional subspace. It is the first approach to jointly couple length-scale sensitivity analysis with structured sparse selection. Theoretically, it guarantees sublinear cumulative regret growth—specifically, (O(sqrt{T}))—while substantially reducing computational complexity. Empirically, LengthScale-Lasso outperforms state-of-the-art methods on high-dimensional synthetic benchmarks and real-world applications, including hyperparameter tuning and robot control. The method thus achieves a favorable balance between strong theoretical guarantees and practical efficiency.

Technology Category

Application Category

📝 Abstract
Bayesian optimization (BO) is a leading method for optimizing expensive black-box optimization and has been successfully applied across various scenarios. However, BO suffers from the curse of dimensionality, making it challenging to scale to high-dimensional problems. Existing work has adopted a variable selection strategy to select and optimize only a subset of variables iteratively. Although this approach can mitigate the high-dimensional challenge in BO, it still leads to sample inefficiency. To address this issue, we introduce a novel method that identifies important variables by estimating the length scales of Gaussian process kernels. Next, we construct an effective search region consisting of multiple subspaces and optimize the acquisition function within this region, focusing on only the important variables. We demonstrate that our proposed method achieves cumulative regret with a sublinear growth rate in the worst case while maintaining computational efficiency. Experiments on high-dimensional synthetic functions and real-world problems show that our method achieves state-of-the-art performance.
Problem

Research questions and friction points this paper is trying to address.

Addresses high-dimensional challenges in Bayesian optimization
Improves sample efficiency via variable selection
Achieves sublinear regret growth with computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Lasso for variable selection in optimization
Estimates Gaussian process kernel length scales
Optimizes acquisition in multi-subspace search regions
🔎 Similar Papers
No similar papers found.
V
Vu Viet Hoang
FPT Software AI Center
Hung The Tran
Hung The Tran
AI Center, VNPT Media
Machine LearningOptimizationReinforcement LearningLarge Language Models
S
Sunil Gupta
Applied AI Institute, Deakin University
V
Vu Nguyen
Amazon, Australia