Cost-Sensitive Freeze-thaw Bayesian Optimization for Efficient Hyperparameter Tuning

📅 2025-10-24
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the imbalance between computational cost and performance gain in hyperparameter optimization (HPO). To this end, we propose a cost-sensitive Bayesian optimization framework. Our method introduces: (1) a dynamic utility function that explicitly models user preferences for trade-offs between performance improvement and computational overhead; (2) a utility-based acquisition function and an automatic stopping criterion, enabling utility-driven sequential decision-making; and (3) transfer learning to enhance the surrogate model, coupled with dynamic resource allocation within a freeze-thaw multi-fidelity framework. Evaluated on standard multi-fidelity HPO benchmarks, our approach significantly outperforms state-of-the-art methods, achieving higher utility under identical computational budgets. The implementation is publicly available.

๐Ÿ“ Abstract
In this paper, we address the problem of cost-sensitive hyperparameter optimization (HPO) built upon freeze-thaw Bayesian optimization (BO). Specifically, we assume a scenario where users want to stop the HPO process early when the expected performance improvement is not satisfactory with respect to the additional computational cost. Motivated by this scenario, we introduce utility into the freeze-thaw framework: a function describing the trade-off between cost and performance that can be estimated from the user's preference data. This utility function, combined with our novel acquisition function and stopping criterion, allows us to dynamically continue training the configuration that we expect to maximally improve the utility in the future, and to automatically stop the HPO process around the maximum utility. Further, we improve the sample efficiency of existing freeze-thaw methods with transfer learning to develop a specialized surrogate model for the cost-sensitive HPO problem. We validate our algorithm on established multi-fidelity HPO benchmarks and show that it outperforms all the freeze-thaw BO and transfer-BO baselines we consider, while achieving a significantly better trade-off between cost and performance. Our code is publicly available at https://github.com/db-Lee/CFBO.
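To make the cost-performance trade-off concrete, here is a minimal toy sketch, not the paper's actual CFBO implementation: it assumes a simple linear utility `u(perf, cost) = perf - lam * cost` (the paper estimates utility from user preference data) and a stopping rule that halts once the expected utility gain of one more training step is non-positive. The names `utility`, `should_stop`, and the list of diminishing gains are all illustrative assumptions.

```python
def utility(perf: float, cost: float, lam: float = 0.05) -> float:
    # Hypothetical linear utility: higher performance is good,
    # each unit of compute cost is penalized by lam.
    return perf - lam * cost

def should_stop(expected_perf_gain: float, step_cost: float, lam: float = 0.05) -> bool:
    # Stop when continuing is expected to decrease utility,
    # i.e. the predicted performance gain no longer pays for the cost.
    return expected_perf_gain - lam * step_cost <= 0.0

# Toy HPO loop: each step trains the most promising configuration a bit
# longer; predicted gains shrink as the learning curve flattens.
perf, cost, lam = 0.70, 0.0, 0.05
expected_gains = [0.08, 0.04, 0.02, 0.004]  # assumed surrogate predictions
history = []
for g in expected_gains:
    if should_stop(g, step_cost=1.0, lam=lam):
        break  # automatic stopping near the utility maximum
    perf, cost = perf + g, cost + 1.0
    history.append(utility(perf, cost, lam))
```

With these numbers the loop takes one step (0.08 gain beats a 0.05 cost penalty) and then stops, since the next predicted gain of 0.04 would not cover its cost. In the actual method, the gain predictions come from a transfer-learned freeze-thaw surrogate over partial learning curves rather than a fixed list.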
Problem

Research questions and friction points this paper is trying to address.

Optimizing hyperparameter tuning with cost-performance trade-offs
Developing automated stopping criteria for computational efficiency
Enhancing freeze-thaw Bayesian optimization through transfer learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates utility function for cost-performance trade-off
Uses novel acquisition function and stopping criterion
Applies transfer learning to improve sample efficiency