PAIR: A Novel Large Language Model-Guided Selection Strategy for Evolutionary Algorithms

📅 2025-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Evolutionary algorithms (EAs) often suffer from premature convergence and low exploration efficiency due to stochastic selection and mutation operators. To address this, we propose an LLM-driven preference-aligned individual pairing mechanism—the first to integrate large language models into the selection phase of EAs. Our method constructs a multidimensional preference model jointly encoding genetic diversity, fitness, and crossover compatibility, and leverages prompt engineering and in-context learning for intelligent, context-aware pairing. Evaluated on TSP benchmarks, it significantly outperforms existing LLM-driven EA baselines: accelerating convergence, reducing optimality gaps, and enhancing population diversity—thereby mitigating premature convergence. The core contribution lies in replacing conventional stochastic or heuristic selection with an interpretable, controllable LLM-based preference modeling framework, enabling principled, adaptive selection grounded in semantic and structural compatibility.
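The summary describes prompting an LLM with diversity, fitness, and compatibility information to guide pairing. The paper's actual prompt format is not shown here, so the template below is purely an assumption, a minimal sketch of how such an in-context pairing prompt might be assembled:

```python
def build_pairing_prompt(population, tour_lengths):
    """Assemble an in-context pairing prompt for an LLM.
    The wording and field layout are illustrative, not the paper's actual prompt."""
    lines = [
        "You are selecting parent pairs for an evolutionary algorithm solving TSP.",
        "Prefer pairs that are fit (short tours) yet genetically diverse",
        "and compatible for crossover.",
        "Population:",
    ]
    for i, (tour, length) in enumerate(zip(population, tour_lengths)):
        lines.append(f"  individual {i}: tour={tour}, length={length:.3f}")
    lines.append("Return parent pairs, one per line, in the form: (i, j)")
    return "\n".join(lines)

prompt = build_pairing_prompt([[0, 2, 1, 3], [3, 1, 0, 2]], [4.21, 5.07])
print(prompt)
```

The LLM's reply would then be parsed back into index pairs that drive the crossover step.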

📝 Abstract
Evolutionary Algorithms (EAs) employ random or simplistic selection methods, limiting their exploration of solution spaces and convergence to optimal solutions. The randomness in performing crossover or mutations may limit the model's ability to evolve efficiently. This paper introduces Preference-Aligned Individual Reciprocity (PAIR), a novel selection approach leveraging Large Language Models to emulate human-like mate selection, thereby introducing intelligence to the pairing process in EAs. PAIR prompts an LLM to evaluate individuals within a population based on genetic diversity, fitness level, and crossover compatibility, guiding more informed pairing decisions. We evaluated PAIR against a baseline method called LLM-driven EA (LMEA), published recently. Results indicate that PAIR significantly outperforms LMEA across various TSP instances, achieving lower optimality gaps and improved convergence. This performance is especially noticeable when combined with the flash thinking model, demonstrating increased population diversity to escape local optima. In general, PAIR provides a new strategy in the area of in-context learning for LLM-driven selection in EAs via sophisticated preference modelling, paving the way for improved solutions and further studies into LLM-guided optimization.
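To make the selection idea concrete, here is a runnable stand-in for the pairing step. The paper delegates the pair-quality judgment to an LLM; in this sketch a simple numeric function (`score_pair`, combining tour fitness with an edge-based diversity measure) plays that role so the loop executes offline. The weights and the greedy matching are assumptions, not the paper's method:

```python
import itertools
import math
import random

# Hypothetical sketch of PAIR-style pairing for a TSP population.
CITIES = [(random.random(), random.random()) for _ in range(10)]

def tour_length(tour):
    """Total Euclidean length of a closed TSP tour."""
    return sum(math.dist(CITIES[a], CITIES[b])
               for a, b in zip(tour, tour[1:] + tour[:1]))

def edge_set(tour):
    """Undirected edges of a tour, used as a simple genetic-diversity measure."""
    return {frozenset(e) for e in zip(tour, tour[1:] + tour[:1])}

def score_pair(t1, t2):
    """Stand-in for the LLM's preference judgment: favour pairs that are
    individually fit (short tours) but genetically dissimilar."""
    fitness = -(tour_length(t1) + tour_length(t2))   # shorter tours score higher
    diversity = len(edge_set(t1) ^ edge_set(t2))     # count of differing edges
    return fitness + 0.1 * diversity                 # weighting is illustrative

def pair_population(pop):
    """Greedily match individuals into parent pairs by descending pair score."""
    unmatched, pairs = set(range(len(pop))), []
    candidates = sorted(itertools.combinations(range(len(pop)), 2),
                        key=lambda ij: score_pair(pop[ij[0]], pop[ij[1]]),
                        reverse=True)
    for i, j in candidates:
        if i in unmatched and j in unmatched:
            pairs.append((pop[i], pop[j]))
            unmatched -= {i, j}
    return pairs

random.seed(0)
population = [random.sample(range(10), 10) for _ in range(6)]
parent_pairs = pair_population(population)
print(len(parent_pairs))  # 3 pairs from 6 individuals
```

Swapping `score_pair` for a call that prompts an LLM with the same diversity, fitness, and compatibility signals recovers the scheme the abstract describes.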
Problem

Research questions and friction points this paper is trying to address.

EAs rely on random or simplistic selection, limiting exploration of the solution space.
Randomness in crossover and mutation hinders efficient evolution and convergence to optimal solutions.
Pairing decisions ignore genetic diversity, fitness, and crossover compatibility.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-guided selection for evolutionary algorithms
Preference-Aligned Individual Reciprocity (PAIR) strategy
Enhanced pairing via genetic diversity and fitness
Shady Ali — Student. Interests: Machine Learning, Deep Learning, Data Science, Natural Language Processing
Mahmoud Ashraf — Egypt University of Informatics
Seif Hegazy — Egypt University of Informatics
Fatty Salem — Egypt University of Informatics
Hoda Mokhtar — Egypt University of Informatics
Mohamed Medhat Gaber — Adjunct Professor, Queensland University of Technology. Interests: Artificial Intelligence, Data Mining, Data Stream Mining, Machine Learning, Random Forests
M. Alrefaie — Egypt University of Informatics, Premio.AI