🤖 AI Summary
In combinatorial optimization (CO), conventional methods that yield a single optimal solution often fail to meet real-world demands for solution diversity. Method: This paper proposes an unsupervised-learning framework that generates multiple high-quality, diverse solutions in a single training run. Its core idea is to couple continual tensor relaxation with an annealing mechanism, so that a shared latent representation supports implicit parallel exploration of multiple objectives and eliminates the need for repeated solving. Contribution/Results: The method simultaneously produces solution sets exhibiting both penalty diversity and structural variation, enabling post-hoc user selection while preserving solution quality. Evaluated on multiple standard CO benchmarks, it attains strong coverage and solution quality in a single training run, finding these diverse solutions several times faster than repeatedly running existing unsupervised solvers.
📝 Abstract
Finding the best solution is a common objective in combinatorial optimization (CO). In practice, directly handling constraints is often challenging, so they are incorporated into the objective function as penalty terms. However, balancing these penalties to achieve the desired solution is time-consuming. Additionally, the formulated objective functions and constraints often only approximate real-world scenarios, so the optimal solution of the formulation is not necessarily the best solution to the original real-world problem. One remedy is to obtain (i) penalty-diversified solutions with varying penalty strengths for the former issue and (ii) variation-diversified solutions with different characteristics for the latter issue. Users can then post-select the desired solution from these diverse solutions. However, efficiently finding such diverse solutions is harder than identifying a single one. This study introduces Continual Tensor Relaxation Annealing (CTRA) for unsupervised-learning (UL)-based CO solvers, a computationally efficient framework for finding these diverse solutions in a single training run. The key idea is to leverage the representation learning capability of UL-based solvers to automatically and efficiently learn common representations and to exploit parallelism. Numerical experiments show that CTRA enables UL-based solvers to find these diverse solutions much faster than repeatedly running existing UL-based solvers.
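To make the penalty-diversified idea concrete, below is a minimal sketch (not the authors' implementation) of the general pattern the abstract describes: relaxed binary variables, a penalty-weighted objective, an annealing term that ramps from encouraging exploration to enforcing binary values, and a batch of penalty strengths optimized in parallel within one training run. The toy problem (maximum independent set on a 5-cycle), the specific penalty values, the annealing schedule, and all variable names are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Toy problem: maximum independent set on a 5-cycle, via a penalty method.
# All constants below are illustrative, not taken from the paper.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
n = 5
lambdas = np.array([0.5, 1.0, 2.0, 4.0])  # batch of penalty strengths
K = len(lambdas)

rng = np.random.default_rng(0)
theta = rng.normal(0.0, 0.1, size=(K, n))  # one parameter row per penalty strength


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


lr, steps = 0.1, 2000
for t in range(steps):
    x = sigmoid(theta)               # relaxed binary variables in (0, 1)
    gamma = -1.0 + 2.0 * t / steps   # annealed: negative (smooth) -> positive (binary)

    grad_x = -np.ones((K, n))        # gradient of the objective -sum(x)
    for i, j in edges:               # penalty lambda_k * x_i * x_j for each edge
        grad_x[:, i] += lambdas * x[:, j]
        grad_x[:, j] += lambdas * x[:, i]
    # annealing term gamma * sum_i (1 - (2 x_i - 1)^2); its x-gradient is -4 gamma (2x - 1)
    grad_x += gamma * (-4.0) * (2.0 * x - 1.0)

    theta -= lr * grad_x * x * (1.0 - x)  # chain rule through the sigmoid

# Round each row: one candidate solution per penalty strength, from one run.
solutions = (sigmoid(theta) > 0.5).astype(int)
```

Small penalty strengths tolerate constraint violations in favor of the objective, while large ones enforce feasibility, so the rounded rows form a penalty-diversified solution set from a single optimization, rather than one run per penalty value.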