BILBO: BILevel Bayesian Optimization

📅 2025-02-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In black-box bilevel optimization, repeatedly solving the lower-level problem is computationally inefficient, and solutions become unreliable under observational noise. Method: The paper introduces a joint Bayesian optimization framework that departs from conventional nested optimization and establishes a cooperative modeling mechanism between the upper- and lower-level problems: using Gaussian process regression and a variant of the upper confidence bound (UCB), it explicitly models the uncertainty of estimated lower-level solutions and incorporates a conditional reassignment strategy, yielding a single-query, confidence-bound trusted set that simultaneously bounds lower-level suboptimality and encourages global exploration. Contribution/Results: The authors prove a sublinear regret bound, and experiments on synthetic and real-world tasks show consistent improvements over state-of-the-art methods, with up to a 3.2× gain in query efficiency.
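The core building block described above, a GP posterior whose confidence bounds define a trusted set of plausible lower-level minimizers, can be sketched as follows. This is a minimal illustration with a toy RBF-kernel GP and hypothetical parameter choices (`lengthscale`, `beta`), not the paper's implementation:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.3, variance=1.0):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise=1e-3):
    """GP posterior mean and standard deviation at test points Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xs, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(np.diag(rbf_kernel(Xs, Xs)) - (v**2).sum(0), 1e-12, None)
    return mu, np.sqrt(var)

def trusted_set(mu, sigma, beta=2.0):
    """Mark candidates whose lower confidence bound does not exceed the
    best upper confidence bound: plausible lower-level minimizers."""
    lcb, ucb = mu - beta * sigma, mu + beta * sigma
    return lcb <= ucb.min()

rng = np.random.default_rng(0)
g = lambda x: np.sin(3 * x[:, 0])             # toy lower-level objective
X = rng.uniform(0, 2, (8, 1))                  # noisy observations so far
y = g(X) + 0.01 * rng.standard_normal(8)
Xs = np.linspace(0, 2, 200)[:, None]           # candidate grid
mu, sigma = gp_posterior(X, y, Xs)
mask = trusted_set(mu, sigma)                  # True where x may be a minimizer
```

As the GP sees more data, the bounds tighten and `mask` shrinks toward the true lower-level optimum, which is how suboptimality is controlled without re-solving the lower level.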

📝 Abstract
Bilevel optimization is characterized by a two-level optimization structure, where the upper-level problem is constrained by optimal lower-level solutions, and such structures are prevalent in real-world problems. The constraint by optimal lower-level solutions poses significant challenges, especially in noisy, constrained, and derivative-free settings, as repeating lower-level optimizations is sample inefficient and predicted lower-level solutions may be suboptimal. We present BILevel Bayesian Optimization (BILBO), a novel Bayesian optimization algorithm for general bilevel problems with black-box functions, which optimizes both upper- and lower-level problems simultaneously, without the repeated lower-level optimization required by existing methods. BILBO samples from confidence-bound-based trusted sets, which bound the suboptimality on the lower level. Moreover, BILBO selects only one function query per iteration, where the function query selection strategy incorporates the uncertainty of estimated lower-level solutions and includes a conditional reassignment of the query to encourage exploration of the lower-level objective. The performance of BILBO is theoretically guaranteed with a sublinear regret bound for commonly used kernels and is empirically evaluated on several synthetic and real-world problems.
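The single-query-per-iteration idea in the abstract can be sketched as a selection rule: pick the trusted candidate that looks best for the upper level, then conditionally reassign the query to the lower-level objective when its uncertainty there dominates. The function name, the comparison `sig_g > sig_F`, and all numbers below are hypothetical illustrations, not the paper's actual criterion:

```python
import numpy as np

def select_query(mu_F, sig_F, mu_g, sig_g, trusted, beta=2.0):
    """Choose one candidate and one function to evaluate this iteration.

    mu_F, sig_F: GP posterior mean/std of the upper-level objective.
    mu_g, sig_g: GP posterior mean/std of the lower-level objective.
    trusted:     boolean mask of plausible lower-level minimizers.
    """
    idx = np.flatnonzero(trusted)
    # Optimistic choice: lowest upper-level LCB among trusted candidates.
    best = idx[np.argmin((mu_F - beta * sig_F)[idx])]
    # Conditional reassignment: query the lower-level objective instead
    # when its uncertainty at the chosen point dominates (assumed rule).
    which = "lower" if sig_g[best] > sig_F[best] else "upper"
    return best, which

mu_F = np.array([0.5, 0.2, 0.9, 0.1])
sig_F = np.array([0.1, 0.3, 0.1, 0.05])
mu_g = np.array([0.0, 0.1, 0.2, 0.3])
sig_g = np.array([0.2, 0.1, 0.4, 0.5])
trusted = np.array([True, True, False, True])
best, which = select_query(mu_F, sig_F, mu_g, sig_g, trusted)
```

Restricting the argmin to `trusted` is what couples the two levels: the upper level is only ever optimized over points that remain plausible lower-level solutions.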
Problem

Research questions and friction points this paper is trying to address.

Optimizes bilevel problems efficiently
Reduces repeated lower-level optimizations
Handles noisy and derivative-free settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian optimization algorithm
Simultaneous upper-lower optimization
Confidence-bound trusted sets