🤖 AI Summary
Instance-specific algorithm configuration (ISAC) for combinatorial optimization (CO) solvers suffers from high configuration latency (tens of seconds) due to reliance on hand-crafted feature extraction and explicit clustering, hindering rapid adaptation across diverse tasks.
Method: We propose the first end-to-end graph neural network (GNN)-driven ISAC framework that directly maps raw problem structures—e.g., constraint graphs—to optimal solver parameter configurations, eliminating manual feature engineering and explicit clustering. By integrating a differentiable solver interface and performing joint optimization, our approach enables sub-second configuration at inference time.
Contribution/Results: This work pioneers the use of GNNs in the ISAC *execution phase*, achieving efficient, generalizable instance-adaptive parameter prediction. On multiple CO benchmarks, it preserves the solver's time-to-solution (TTS) while significantly reducing the total execution time (T_tot = TTS + T_tune), demonstrating both computational efficiency and robust generalization.
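The end-to-end idea — feeding the raw constraint graph through a GNN that outputs a solver-parameter vector, with no hand-crafted features or clustering in between — can be sketched as below. This is a minimal, untrained illustration in plain NumPy; the message-passing scheme, feature dimensions, and parameter count are all hypothetical stand-ins, not the authors' actual architecture.

```python
# Hypothetical sketch: a tiny message-passing network (random, untrained
# weights) that maps a constraint graph directly to solver parameters.
import numpy as np

rng = np.random.default_rng(0)

def gnn_predict_params(adj, node_feats, n_params=3, rounds=2):
    """Map a graph (adjacency list + node features) to a parameter vector."""
    h = node_feats                              # (n_nodes, d) embeddings
    d = h.shape[1]
    W = rng.normal(size=(d, d)) / np.sqrt(d)    # shared message weight
    for _ in range(rounds):
        # mean-aggregate neighbor embeddings, then linear + ReLU update
        agg = np.stack([h[nbrs].mean(axis=0) if nbrs else np.zeros(d)
                        for nbrs in adj])
        h = np.maximum(0.0, (h + agg) @ W)
    g = h.mean(axis=0)                          # graph-level mean-pool readout
    W_out = rng.normal(size=(d, n_params)) / np.sqrt(d)
    # squash into (0, 1); a real system would rescale per parameter range
    return 1.0 / (1.0 + np.exp(-(g @ W_out)))

# toy constraint graph: 4 variables with chain-structured constraints
adj = [[1], [0, 2], [1, 3], [2]]
x = rng.normal(size=(4, 8))
params = gnn_predict_params(adj, x)
print(params.shape)  # (3,) — one value per solver parameter
```

Because the whole execution step collapses to a single forward pass like this, configuration latency is bounded by network inference time rather than by log collection and feature computation.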
📝 Abstract
Combinatorial optimization (CO) problems are pivotal across various industrial applications, where the speed of solving these problems is crucial. Improving the performance of CO solvers across diverse input instances requires fine-tuning solver parameters for each instance. However, this tuning process is time-consuming, and the time required grows with the number of instances. To address this, a method called instance-specific algorithm configuration (ISAC) has been devised. This approach involves two main steps: training and execution. During the training step, features are extracted from various instances and grouped into clusters, and parameters are fine-tuned for each cluster. This cluster-specific tuning yields a set of generalized parameters for the instances belonging to each cluster. In the execution step, features are extracted from an unseen instance to determine its cluster, and the corresponding pre-tuned parameters are applied. Generally, the running time of a solver is evaluated by the time to solution ($TTS$). However, methods like ISAC require preprocessing, so the total execution time is $T_{tot}=TTS+T_{tune}$, where $T_{tune}$ represents the tuning time. Although the goal is to minimize $T_{tot}$, feature extraction in the ISAC method itself requires non-negligible computation: the extracted features include summary statistics of the solver's execution logs, and computing them takes tens of seconds. This research presents a method that significantly reduces the time of the ISAC execution step by replacing feature extraction and cluster determination with a graph neural network. Experimental results show that $T_{tune}$ in the execution step, which takes tens of seconds in the original ISAC approach, can be reduced to under a second.
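The cost model above, $T_{tot}=TTS+T_{tune}$, can be made concrete with hypothetical round numbers that match the orders of magnitude stated in the abstract (tens of seconds for log-based feature extraction versus a sub-second GNN forward pass); the specific values below are illustrative, not measured results.

```python
# Illustrative timing model from the abstract: T_tot = TTS + T_tune.
# All figures are hypothetical round numbers, not experimental data.
def total_time(tts, t_tune):
    """T_tot = TTS + T_tune, in seconds."""
    return tts + t_tune

tts = 5.0           # solver time-to-solution (same tuned parameters either way)
t_tune_isac = 30.0  # classic ISAC: extract log statistics, assign cluster
t_tune_gnn = 0.1    # proposed: one GNN forward pass on the raw instance graph

print(total_time(tts, t_tune_isac))  # 35.0
print(total_time(tts, t_tune_gnn))   # 5.1
```

The point of the comparison is that once $T_{tune}$ shrinks below a second, $T_{tot}$ is dominated by $TTS$ itself, so the preprocessing overhead of instance-specific configuration effectively disappears.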