🤖 AI Summary
This work addresses the challenges of high-dimensional constrained Bayesian optimization, where the curse of dimensionality and the premature contraction of traditional trust-region methods hinder performance. The authors propose the Local Constrained Bayesian Optimization (LCBO) framework, which alternates between rapid local descent and uncertainty-driven exploration on a differentiable surrogate model that incorporates constraint penalties. LCBO is the first method to provide theoretical guarantees in the high-dimensional constrained setting, achieving a convergence rate for the Karush–Kuhn–Tucker (KKT) residual that is polynomial in the dimension, thereby avoiding the exponential regret bounds inherent to global approaches. Empirical evaluations on benchmarks of up to 100 dimensions show that LCBO significantly outperforms state-of-the-art methods in both optimization performance and stability.
📝 Abstract
Bayesian optimization (BO) for high-dimensional constrained problems remains a significant challenge due to the curse of dimensionality. We propose Local Constrained Bayesian Optimization (LCBO), a novel framework tailored to such settings. Unlike trust-region methods, which are prone to premature shrinking when confronting tight or complex constraints, LCBO leverages the differentiable landscape of constraint-penalized surrogates to alternate between rapid local descent and uncertainty-driven exploration. Theoretically, we prove that LCBO achieves a convergence rate for the Karush–Kuhn–Tucker (KKT) residual that depends polynomially on the dimension $d$ for common kernels under mild assumptions, offering a rigorous alternative to global BO, where regret bounds typically scale exponentially. Extensive evaluations on high-dimensional benchmarks (up to 100D) demonstrate that LCBO consistently outperforms state-of-the-art baselines.
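The alternating scheme described above can be sketched in a few dozen lines. The sketch below is a minimal illustration under many assumptions of our own: a numpy RBF-kernel GP as the surrogate, a fixed penalty weight `rho`, finite-difference gradients, a toy objective/constraint pair, and a simple every-fourth-step exploration schedule. None of these details come from the paper; they only show the shape of a descent/exploration loop on a constraint-penalized surrogate mean.

```python
# Illustrative sketch (NOT the paper's algorithm): alternate local descent on a
# constraint-penalized GP surrogate mean with uncertainty-driven exploration.
# Kernel, rho, step sizes, schedule, and the toy problem are all assumptions.
import numpy as np

def rbf(A, B, ls=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

class GP:
    """Minimal zero-mean GP regressor with an RBF kernel."""
    def __init__(self, X, y, noise=1e-6):
        self.X, self.y = X, y
        self.Kinv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))
    def mean(self, x):
        return float(rbf(x[None], self.X) @ self.Kinv @ self.y)
    def var(self, x):
        k = rbf(x[None], self.X)
        return float(1.0 - k @ self.Kinv @ k.T)

def f(x):  # toy objective (assumption)
    return float((x ** 2).sum())

def g(x):  # toy constraint g(x) <= 0 (assumption)
    return float(1.0 - x.sum())

def penalized(gp_f, gp_g, x, rho=10.0):
    # Differentiable surrogate landscape: mean objective + hinge penalty.
    return gp_f.mean(x) + rho * max(0.0, gp_g.mean(x))

def num_grad(fun, x, h=1e-4):
    e = np.eye(len(x))
    return np.array([(fun(x + h * e[i]) - fun(x - h * e[i])) / (2 * h)
                     for i in range(len(x))])

rng = np.random.default_rng(0)
d = 2
X = rng.uniform(0.0, 2.0, size=(6, d))           # initial design
x = X[np.argmin([f(xi) + 10 * max(0.0, g(xi)) for xi in X])]

for it in range(20):
    gp_f = GP(X, np.array([f(xi) for xi in X]))
    gp_g = GP(X, np.array([g(xi) for xi in X]))
    if it % 4 != 3:
        # Rapid local descent on the penalized surrogate mean.
        x = x - 0.1 * num_grad(lambda z: penalized(gp_f, gp_g, z), x)
    else:
        # Uncertainty-driven exploration: pick the most uncertain candidate.
        cand = x + 0.3 * rng.standard_normal((16, d))
        x = cand[np.argmax([gp_f.var(c) for c in cand])]
    x = np.clip(x, 0.0, 2.0)
    X = np.vstack([X, x])                        # evaluate and augment data

best = min(f(xi) + 10 * max(0.0, g(xi)) for xi in X)
```

Since every iterate is appended to the data set, the incumbent penalized value `best` can only improve on the best initial design point; in a full method the penalty weight and step sizes would of course be tuned, and the exploration step would use a principled acquisition rule rather than random candidates.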