EvoSpeak: Large Language Models for Interpretable Genetic Programming-Evolved Heuristics

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Genetic programming (GP) often yields heuristic rules with poor interpretability, slow convergence, and limited cross-task transferability when applied to dynamic, large-scale optimization problems. To address these limitations, this paper proposes EvoSpeak, a synergistic framework integrating large language models (LLMs) with GP. EvoSpeak learns from high-quality GP heuristics and uses the extracted knowledge to generate semantically grounded warm-start populations, to translate opaque GP trees into concise natural-language explanations, and to support knowledge transfer and preference-aware rule generation across related tasks. Evaluated on dynamic flexible job shop scheduling (DFJSS), EvoSpeak significantly improves both solution quality and convergence speed in single- and multi-objective settings. Moreover, it produces human-readable decision-logic reports, enhancing transparency, practical utility, and cross-task adaptability, thereby bridging the gap between automated search and domain-informed, interpretable optimization.

📝 Abstract
Genetic programming (GP) has demonstrated strong effectiveness in evolving tree-structured heuristics for complex optimization problems. Yet, in dynamic and large-scale scenarios, the most effective heuristics are often highly complex, hindering interpretability, slowing convergence, and limiting transferability across tasks. To address these challenges, we present EvoSpeak, a novel framework that integrates GP with large language models (LLMs) to enhance the efficiency, transparency, and adaptability of heuristic evolution. EvoSpeak learns from high-quality GP heuristics, extracts knowledge, and leverages this knowledge to (i) generate warm-start populations that accelerate convergence, (ii) translate opaque GP trees into concise natural-language explanations that foster interpretability and trust, and (iii) enable knowledge transfer and preference-aware heuristic generation across related tasks. We verify the effectiveness of EvoSpeak through extensive experiments on dynamic flexible job shop scheduling (DFJSS), under both single- and multi-objective formulations. The results demonstrate that EvoSpeak produces more effective heuristics, improves evolutionary efficiency, and delivers human-readable reports that enhance usability. By coupling the symbolic reasoning power of GP with the interpretative and generative strengths of LLMs, EvoSpeak advances the development of intelligent, transparent, and user-aligned heuristics for real-world optimization problems.
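To make the abstract's central object concrete, here is a minimal illustrative sketch of the kind of tree-structured dispatching rule GP evolves for DFJSS: a priority function over candidate operations, plus a pretty-printer producing the raw symbolic form an LLM would translate into prose. All names here (`OpState`, the feature terminals `PT`/`WIQ`/`NIQ`/`W`, etc.) are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: a toy tree-structured dispatching rule of the
# kind GP evolves for dynamic flexible job shop scheduling (DFJSS).
# All names below are hypothetical, not from the EvoSpeak paper.
from dataclasses import dataclass
from typing import Callable, Dict, Union


@dataclass
class OpState:
    """Terminal features a dispatching rule might observe for one operation."""
    PT: float   # processing time of the operation on the machine
    WIQ: float  # total work in the machine's queue
    NIQ: float  # number of operations in the queue
    W: float    # job weight


Expr = Union["Op", str]  # internal node, or the name of a terminal feature


@dataclass
class Op:
    fn: str      # one of '+', '-', '*', 'min'
    left: Expr
    right: Expr


FUNCS: Dict[str, Callable[[float, float], float]] = {
    '+': lambda a, b: a + b,
    '-': lambda a, b: a - b,
    '*': lambda a, b: a * b,
    'min': min,
}


def evaluate(tree: Expr, s: OpState) -> float:
    """Evaluate the rule tree as a priority score (lower = schedule first)."""
    if isinstance(tree, str):
        return getattr(s, tree)
    return FUNCS[tree.fn](evaluate(tree.left, s), evaluate(tree.right, s))


def render(tree: Expr) -> str:
    """Pretty-print the tree: the opaque symbolic form an LLM would explain."""
    if isinstance(tree, str):
        return tree
    if tree.fn == 'min':
        return f"min({render(tree.left)}, {render(tree.right)})"
    return f"({render(tree.left)} {tree.fn} {render(tree.right)})"


# A weighted-shortest-processing-time-like rule with a queue-pressure term.
rule = Op('+', Op('*', 'PT', 'W'), Op('min', 'WIQ', 'NIQ'))
print(render(rule))                                       # ((PT * W) + min(WIQ, NIQ))
print(evaluate(rule, OpState(PT=3.0, WIQ=10.0, NIQ=2.0, W=2.0)))  # 8.0
```

The `render` output is exactly the kind of expression the abstract calls "opaque": even this small tree is hard to read, and real evolved rules are far larger, which is the interpretability gap the LLM-generated explanations target.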
Problem

Research questions and friction points this paper is trying to address.

Enhancing interpretability of complex genetic programming heuristics
Accelerating convergence in dynamic large-scale optimization problems
Enabling knowledge transfer across related optimization tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates genetic programming with large language models
Generates warm-start populations to accelerate convergence
Translates GP trees into natural-language explanations
Meng Xu
Singapore Institute of Manufacturing Technology, Agency for Science, Technology and Research, Singapore
Jiao Liu
Research Fellow, College of Computing and Data Science, Nanyang Technological University
Evolutionary Computation, Bayesian Optimization, Multiobjective Optimization, Physics-Informed ML
Yew Soon Ong
College of Computing and Data Science, Nanyang Technological University, and the Centre for Frontier AI Research, Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore