🤖 AI Summary
This work proposes LLMize, a framework for complex numerical optimization problems where formalizing constraints and heuristics is challenging. By treating the optimization process as a black box, LLMize leverages large language models (LLMs) to generate candidate solutions in natural language and iteratively refine them using scores from an external objective function as feedback. Notably, it allows constraints and domain knowledge to be injected directly via natural language, bypassing the need for mathematical programming or metaheuristic design and thereby substantially lowering the barrier to tackling intricate optimization tasks. The framework integrates iterative prompting, in-context learning, OPRO, and hybrid strategies inspired by evolutionary algorithms and simulated annealing. Empirical validation across diverse tasks, including convex optimization, linear programming, the Traveling Salesman Problem, hyperparameter tuning, and nuclear fuel assembly layout, demonstrates its scope: while it underperforms classical solvers on simple problems, it offers distinct practical value in complex domains.
📝 Abstract
Large language models (LLMs) have recently shown strong reasoning capabilities beyond traditional language tasks, motivating their use for numerical optimization. This paper presents LLMize, an open-source Python framework that enables LLM-driven optimization through iterative prompting and in-context learning. LLMize formulates optimization as a black-box process in which candidate solutions are generated in natural language, evaluated by an external objective function, and refined over successive iterations using solution-score feedback. The framework supports multiple optimization strategies, including Optimization by Prompting (OPRO) and hybrid LLM-based methods inspired by evolutionary algorithms and simulated annealing. A key advantage of LLMize is the ability to inject constraints, rules, and domain knowledge directly through natural language descriptions, allowing practitioners to define complex optimization problems without requiring expertise in mathematical programming or metaheuristic design. LLMize is evaluated on convex optimization, linear programming, the Traveling Salesman Problem, neural network hyperparameter tuning, and nuclear fuel lattice optimization. Results show that while LLM-based optimization is not competitive with classical solvers for simple problems, it provides a practical and accessible approach for complex, domain-specific tasks where constraints and heuristics are difficult to formalize.
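To make the loop described in the abstract concrete, here is a minimal sketch of an OPRO-style black-box optimization loop: a short trajectory of (solution, score) pairs is kept as in-context "memory" and handed to a proposer that suggests the next candidate. All names (`opro_style_optimize`, `perturb`, etc.) are hypothetical and do not reflect LLMize's actual API; the proposer, which in LLMize would be an LLM prompted with the trajectory in natural language, is replaced here by a simple numeric stub so the sketch runs standalone.

```python
import random

def opro_style_optimize(objective, propose, init, iters=200, memory=8, seed=0):
    """Black-box loop in the spirit of OPRO: keep the best (solution, score)
    pairs seen so far, show them to a proposer (an LLM in practice), and
    ask for a better candidate. Lower score is better."""
    rng = random.Random(seed)
    trajectory = [(init, objective(init))]
    for _ in range(iters):
        trajectory.sort(key=lambda pair: pair[1])   # best-scoring first
        trajectory = trajectory[:memory]            # bounded in-context memory
        candidate = propose(trajectory, rng)        # LLM call in the real system
        trajectory.append((candidate, objective(candidate)))
    trajectory.sort(key=lambda pair: pair[1])
    return trajectory[0]

def perturb(trajectory, rng):
    """Toy stand-in for the LLM proposer: perturb the best-known solution."""
    best, _ = trajectory[0]
    return [x + rng.gauss(0.0, 0.3) for x in best]

# Minimize a simple convex objective, f(x) = sum of x_i^2, from a poor start.
best, score = opro_style_optimize(
    objective=lambda x: sum(v * v for v in x),
    propose=perturb,
    init=[2.0, -1.5],
)
```

The design point this illustrates is that the objective function stays entirely external: swapping the stub `perturb` for an LLM prompted with the serialized trajectory (plus natural-language constraints) changes the proposer, not the loop.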