🤖 AI Summary
Hyperparameter optimization for large language model (LLM) merging incurs prohibitive computational costs, which hinders efficient development and evaluation of optimization algorithms.
Method: We propose lightweight surrogate benchmarks that define multidimensional hyperparameter search spaces, collect a small number of real merging experiments, and construct regression-based surrogate models that accurately predict merged-model performance across hyperparameter configurations.
Contribution/Results: Compared to direct tuning, our surrogates reduce evaluation cost by over two orders of magnitude while faithfully reproducing the convergence behavior and relative ranking of optimization algorithms. Experiments across multiple LLM merging tasks demonstrate high predictive accuracy (average MAE < 0.8%) and strong generalization. The benchmarks provide an efficient, reproducible, low-cost standardized testbed for developing, comparing, and deploying hyperparameter optimization algorithms for LLM merging.
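The surrogate-construction step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the regressor here is a hypothetical distance-weighted k-NN model, `expensive_eval` is a toy stand-in for a real merging experiment, and the 2-D hyperparameter space is invented for the example.

```python
import random

# Fit a regression surrogate on a small set of (hyperparameter vector, score)
# pairs collected from real (here: simulated) merging runs.
# Assumed design: inverse-distance-weighted k-nearest-neighbor regression.
def fit_knn_surrogate(configs, scores, k=3):
    def predict(x):
        # Euclidean distance from the query config to each training config.
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5, s)
            for c, s in zip(configs, scores)
        )
        top = dists[:k]
        weights = [1.0 / (d + 1e-9) for d, _ in top]  # epsilon avoids div-by-zero
        return sum(w * s for w, (_, s) in zip(weights, top)) / sum(weights)
    return predict

# Toy stand-in for one expensive merging experiment (hypothetical benchmark
# score as a smooth function of two merging hyperparameters).
def expensive_eval(cfg):
    alpha, beta = cfg
    return 1.0 - (alpha - 0.6) ** 2 - (beta - 0.3) ** 2

random.seed(0)
train_configs = [(random.random(), random.random()) for _ in range(50)]
train_scores = [expensive_eval(c) for c in train_configs]
surrogate = fit_knn_surrogate(train_configs, train_scores)

# After fitting, each surrogate call is essentially free compared to a
# real merging-and-evaluation run.
error = abs(surrogate((0.5, 0.5)) - expensive_eval((0.5, 0.5)))
print(f"prediction error at held-out config: {error:.4f}")
```

Once fitted, the surrogate replaces the expensive merge-then-evaluate loop, which is where the cost reduction comes from.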
📝 Abstract
Model merging techniques aim to integrate the abilities of multiple models into a single model. Most model merging techniques have hyperparameters, and their settings affect the performance of the merged model. Because several existing works show that tuning these hyperparameters can enhance the merging outcome, developing hyperparameter optimization algorithms for model merging is a promising direction. However, this optimization process is computationally expensive, particularly when merging LLMs. In this work, we develop surrogate benchmarks for optimizing merging hyperparameters, enabling algorithm development and performance comparison at low cost. We define two search spaces and collect data samples to construct surrogate models that predict the performance of a merged model from a hyperparameter configuration. We demonstrate that our benchmarks predict the performance of merged models well and faithfully simulate the behavior of optimization algorithms.
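Simulating optimizer behavior on a surrogate can be sketched as below. This is an assumed workflow, not the paper's code: `surrogate` is a toy analytic stand-in for the learned predictor, and the two optimizers (random search and a simple hill-climber) are generic examples of algorithms one might benchmark.

```python
import random

# Toy analytic stand-in for a fitted surrogate over a hypothetical 2-D
# merging-hyperparameter space (not the paper's actual model).
def surrogate(cfg):
    alpha, beta = cfg
    return 1.0 - (alpha - 0.6) ** 2 - (beta - 0.3) ** 2

def random_search(budget, rng):
    # Best-so-far trace: the curve used to compare convergence behavior.
    best, trace = -float("inf"), []
    for _ in range(budget):
        best = max(best, surrogate((rng.random(), rng.random())))
        trace.append(best)
    return trace

def hill_climb(budget, rng, step=0.1):
    # Greedy local search: accept a perturbed config if it is no worse.
    x = (rng.random(), rng.random())
    best, trace = surrogate(x), []
    trace.append(best)
    for _ in range(budget - 1):
        cand = tuple(min(1.0, max(0.0, v + rng.uniform(-step, step))) for v in x)
        if surrogate(cand) >= best:
            x, best = cand, surrogate(cand)
        trace.append(best)
    return trace

rs_trace = random_search(200, random.Random(0))
hc_trace = hill_climb(200, random.Random(1))
# Every surrogate call here replaces a full merge-and-evaluate run, so
# hundreds of trials cost effectively nothing.
print(f"random search best: {rs_trace[-1]:.4f}, hill climb best: {hc_trace[-1]:.4f}")
```

Because each trial is a cheap function call rather than a real merging run, convergence curves and algorithm rankings can be produced in seconds.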