🤖 AI Summary
Large language models (LLMs) exhibit limited capability in solving multi-objective optimization problems, satisfying strict constraints, and navigating large-scale solution spaces. Method: This paper proposes a general LLM-driven genetic algorithm (GA) framework that combines LLMs’ semantic understanding with the GA’s global search capability. The framework comprises seven core components, wherein the LLM generates high-quality initial populations, guides mutation and crossover operations, and dynamically refines the search via fitness-based feedback. Contribution/Results: Systematic ablation studies identify the key performance drivers. Evaluated on three representative classes of complex problems with four state-of-the-art LLMs, the framework achieves significant improvements in solution quality, constraint satisfaction rate, and computational efficiency, establishing a scalable, principled methodology for integrating LLMs with evolutionary algorithms.
📝 Abstract
While Large Language Models (LLMs) have demonstrated impressive abilities across various domains, they still struggle with complex problems characterized by multi-objective optimization, precise constraint satisfaction, and immense solution spaces. To address these limitations, we combine the strong semantic understanding of LLMs with the global search and optimization capability of genetic algorithms, and introduce Lyria, a general LLM-driven genetic algorithm framework comprising 7 essential components. Through extensive experiments with 4 LLMs across 3 types of problems, we demonstrate the efficacy of Lyria. With 7 further ablation experiments, we systematically analyze the factors that affect its performance.
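To make the loop described above concrete, here is a minimal, self-contained sketch of an LLM-driven GA of the kind the abstract outlines. All names (`llm_propose`, `TARGET`, etc.) are illustrative placeholders, not Lyria's actual interface, and the LLM call is mocked with a random edit so the example runs without any model access.

```python
import random

random.seed(0)

TARGET = "genetic"  # toy objective: evolve this string
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate: str) -> int:
    """Number of positions where the candidate matches the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def llm_propose(parent: str) -> str:
    """Stand-in for an LLM-guided mutation: edit one character.
    A real system would prompt the LLM with the parent and its fitness."""
    i = random.randrange(len(parent))
    return parent[:i] + random.choice(ALPHABET) + parent[i + 1:]

def crossover(a: str, b: str) -> str:
    """Single-point crossover; an LLM could instead choose the cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def run_ga(pop_size: int = 20, generations: int = 300):
    # "LLM-generated" initial population, mocked here as random strings.
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    history = []  # best fitness per generation (fitness-based feedback)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        history.append(fitness(pop[0]))
        if history[-1] == len(TARGET):
            break
        elite = pop[: pop_size // 2]  # survivors guide the next round
        children = [llm_propose(crossover(random.choice(elite),
                                          random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness), history

best, history = run_ga()
```

Because the top half of each population is carried over unchanged (elitism), the best fitness in `history` never decreases; swapping the mocked `llm_propose` for a real prompted model call is the main change a full system would need.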