🤖 AI Summary
Customizing efficient algorithms for NP-hard combinatorial optimization problems is time-consuming and labor-intensive, and existing frameworks lack cross-iteration state reuse and progressive optimization capabilities. This paper proposes a lightweight, extensible Python-based optimization framework that enables rapid development of domain-specific solvers atop general-purpose paradigms, including simulated annealing and branch and bound. Its core innovation is a continuous training mechanism: by persisting critical solver state and enabling incremental improvement of solution quality, it transfers knowledge across runs and accelerates convergence. Experimental evaluation in real-world Microsoft scenarios demonstrates a 3–5× speedup in algorithm customization time; solution quality improves steadily across runs and converges reliably, significantly enhancing long-term solving efficacy.
📝 Abstract
Combinatorial optimization problems are prevalent across a wide variety of domains. These problems are often nuanced, their optimal solutions may not be efficiently obtainable, and they can demand substantial time and compute resources to solve (they are NP-hard). A practical course of action for such problems is therefore to use general optimization algorithm paradigms to quickly and easily develop algorithms that are customized to these problems and can produce good solutions in a reasonable amount of time. In this paper, we present optimizn, a Python library for developing customized optimization algorithms under general optimization algorithm paradigms (simulated annealing, branch and bound). Additionally, optimizn offers continuous training, with which users can run their algorithms on a regular cadence, retain the salient aspects of previous runs, and use them in subsequent runs to produce solutions that progressively approach optimality. An earlier version of this paper was peer reviewed and published internally at Microsoft.
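To make the continuous-training idea concrete, the sketch below shows a generic simulated annealing run that accepts the persisted state (the best solution found so far) from a previous run and resumes from it, so solution quality can only improve across runs. This is a minimal illustration of the concept only; the function and state names are hypothetical and do not reflect optimizn's actual API.

```python
import math
import random

def anneal_run(cost, neighbor, state=None, n_iters=2000, init_temp=1.0, seed=0):
    """One simulated annealing run. `state` is a hypothetical dict carrying
    the best solution persisted by earlier runs (continuous training)."""
    rng = random.Random(seed)
    if state is None:
        current = rng.uniform(-10.0, 10.0)          # fresh start
        best, best_cost = current, cost(current)
    else:
        # resume from the best solution saved by a previous run
        best, best_cost = state["best"], state["best_cost"]
        current = best
    cur_cost = cost(current)
    for i in range(n_iters):
        temp = init_temp / (1 + i)                  # simple cooling schedule
        cand = neighbor(current, rng)
        cand_cost = cost(cand)
        # accept improvements always, worse candidates with Metropolis probability
        if cand_cost < cur_cost or rng.random() < math.exp(-(cand_cost - cur_cost) / temp):
            current, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = current, cur_cost
    return {"best": best, "best_cost": best_cost}   # state to persist for the next run

# Toy objective: minimize f(x) = (x - 3)^2. Run 2 reuses run 1's persisted state,
# so its best cost can never be worse than run 1's.
f = lambda x: (x - 3) ** 2
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
run1 = anneal_run(f, step, state=None, seed=1)
run2 = anneal_run(f, step, state=run1, seed=2)
assert run2["best_cost"] <= run1["best_cost"]
```

In practice the returned state dict would be serialized (e.g. to disk) between scheduled runs; because each run starts from the best solution found so far, quality is monotonically non-decreasing across the cadence of runs.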