🤖 AI Summary
Bayesian optimization (BO) suffers from poor scalability in high-dimensional black-box optimization, while neural network-based BO methods incur prohibitive computational costs due to expensive uncertainty estimation.
Method: This paper proposes a scalable neural network optimization framework that eliminates explicit uncertainty modeling. Its core innovation is decoupling exploration from exploitation via an adaptive sampling-region control mechanism: a neural network approximates the objective function, and two independent sampling criteria, one for exploration and one for exploitation, are applied within dynamically adjusted regions. Crucially, the method bypasses posterior inference entirely, operating outside the Bayesian paradigm.
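The decoupled exploration/exploitation step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the surrogate here is a trivial nearest-neighbour predictor standing in for the (unspecified) neural network, and the exploration criterion is a simple max-min-distance rule; both are assumptions made for the sketch. The key property it demonstrates is that neither criterion requires an uncertainty estimate.

```python
import numpy as np

def fit_surrogate(X, y):
    """Hypothetical stand-in for the NN surrogate: a nearest-neighbour
    predictor, since the paper's architecture is not specified here."""
    def predict(q):
        d = np.linalg.norm(X - q, axis=1)
        return y[np.argmin(d)]
    return predict

def snbo_style_step(X, y, center, radius, rng, n_cand=256):
    """One iteration: propose one exploitation point and one exploration
    point inside the current box region [center - radius, center + radius]."""
    surrogate = fit_surrogate(X, y)
    dim = X.shape[1]
    # Candidate pool drawn uniformly from the current sampling region.
    cand = center + rng.uniform(-radius, radius, size=(n_cand, dim))
    # Exploitation criterion: candidate with the lowest surrogate prediction.
    preds = np.array([surrogate(c) for c in cand])
    exploit = cand[np.argmin(preds)]
    # Exploration criterion: candidate farthest from all evaluated samples
    # (a space-filling rule -- no posterior or uncertainty estimate needed).
    dists = np.min(np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2),
                   axis=1)
    explore = cand[np.argmax(dists)]
    return exploit, explore
```

In a full loop, both proposed points would be evaluated on the true objective, appended to `(X, y)`, and the region's center and radius updated adaptively.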
Contribution/Results: Evaluated on benchmarks spanning 10–102 dimensions against four state-of-the-art baselines, the method requires 40–60% fewer function evaluations, converges faster, cuts runtime by at least an order of magnitude, and finds better solutions than the best-performing baseline on the majority of test problems.
📝 Abstract
Bayesian Optimization (BO) is a widely used approach for blackbox optimization that leverages a Gaussian process (GP) model and an acquisition function to guide future sampling. While effective in low-dimensional settings, BO faces scalability challenges in high-dimensional spaces and with a large number of function evaluations due to the computational complexity of GP models. In contrast, neural networks (NNs) offer better scalability and can model complex functions, which has led to the development of NN-based BO approaches. However, these methods typically rely on estimating model uncertainty in NN predictions -- a process that is often computationally intensive and complex, particularly in high dimensions. To address these limitations, a novel method, called scalable neural network-based blackbox optimization (SNBO), is proposed that does not rely on model uncertainty estimation. Specifically, SNBO adds new samples using separate criteria for exploration and exploitation, while adaptively controlling the sampling region to ensure efficient optimization. SNBO is evaluated on a range of optimization problems spanning from 10 to 102 dimensions and compared against four state-of-the-art baseline algorithms. Across the majority of test problems, SNBO attains function values better than the best-performing baseline algorithm, while requiring 40-60% fewer function evaluations and reducing the runtime by at least an order of magnitude.
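The "adaptively controlling the sampling region" part can be illustrated with a common success/failure schedule: expand the region when the latest evaluations improved the incumbent, shrink it otherwise. The specific shrink/expand factors and bounds below are assumptions for the sketch; the paper's exact control rule may differ.

```python
def update_region(radius, improved, shrink=0.5, expand=2.0,
                  r_min=1e-3, r_max=1.0):
    """Hypothetical adaptive sampling-region control: expand the region
    after an improvement, shrink it after a failure, and clip the radius
    to [r_min, r_max]. Factors and bounds are illustrative assumptions."""
    r = radius * (expand if improved else shrink)
    return min(max(r, r_min), r_max)
```

A larger region favors exploration of new areas; a smaller one concentrates samples near the incumbent, so this single scalar mediates the exploration/exploitation balance without any uncertainty model.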