🤖 AI Summary
Evolutionary algorithms (EAs) often suffer from premature convergence and low exploration efficiency due to stochastic selection and mutation operators. To address this, we propose an LLM-driven, preference-aligned individual pairing mechanism, the first to integrate large language models into the selection phase of EAs. Our method constructs a multidimensional preference model jointly encoding genetic diversity, fitness, and crossover compatibility, and leverages prompt engineering and in-context learning for intelligent, context-aware pairing. Evaluated on TSP benchmarks, it significantly outperforms an existing LLM-driven EA baseline: it accelerates convergence, reduces optimality gaps, and enhances population diversity, thereby mitigating premature convergence. The core contribution is replacing conventional stochastic or heuristic selection with an interpretable, controllable LLM-based preference-modeling framework, enabling principled, adaptive selection grounded in semantic and structural compatibility.
📝 Abstract
Evolutionary Algorithms (EAs) typically rely on random or simplistic selection methods, which limit their exploration of the solution space and slow convergence to optimal solutions. The randomness of crossover and mutation can further limit the algorithm's ability to evolve efficiently. This paper introduces Preference-Aligned Individual Reciprocity (PAIR), a novel selection approach that leverages Large Language Models to emulate human-like mate selection, bringing intelligence to the pairing process in EAs. PAIR prompts an LLM to evaluate individuals within a population on genetic diversity, fitness level, and crossover compatibility, guiding more informed pairing decisions. We evaluated PAIR against the recently published LLM-driven EA (LMEA) baseline. Results indicate that PAIR significantly outperforms LMEA across various TSP instances, achieving lower optimality gaps and faster convergence. The improvement is especially pronounced when PAIR is combined with the flash thinking model, which yields greater population diversity and helps escape local optima. Overall, PAIR offers a new in-context-learning strategy for LLM-driven selection in EAs via sophisticated preference modeling, paving the way for improved solutions and further studies into LLM-guided optimization.
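The pairing idea described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the feature definitions (tour length as fitness, an edge-overlap measure as genetic diversity), the prompt wording, and all function names (`select_pairs`, `pairing_prompt`, the `llm_rank` callback standing in for an actual LLM call) are assumptions made for the sketch. Crossover compatibility is folded into the same diversity score here for brevity.

```python
def tour_length(tour, dist):
    # Fitness proxy: total length of a closed TSP tour under matrix `dist`.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def edge_set(tour):
    # Undirected edges of a tour, used as a simple genetic signature.
    return {frozenset((tour[i], tour[(i + 1) % len(tour)])) for i in range(len(tour))}

def diversity(a, b):
    # Fraction of edges NOT shared between two tours (0 = identical tours).
    ea, eb = edge_set(a), edge_set(b)
    return 1.0 - len(ea & eb) / len(ea)

def pairing_prompt(pop, dist):
    # Textual description of every candidate pair: the kind of in-context
    # information an LLM would be asked to rank (format is illustrative).
    lines = []
    for i in range(len(pop)):
        for j in range(i + 1, len(pop)):
            combined = tour_length(pop[i], dist) + tour_length(pop[j], dist)
            lines.append(f"pair ({i},{j}): combined_length={combined:.1f}, "
                         f"diversity={diversity(pop[i], pop[j]):.2f}")
    return "Rank these parent pairs for crossover:\n" + "\n".join(lines)

def select_pairs(pop, dist, llm_rank=None):
    # Preference-aligned pairing: describe all candidate pairs, then let an
    # LLM (via the `llm_rank` callback) order them. Without an LLM, fall
    # back to a deterministic heuristic so the sketch is runnable.
    candidates = [(i, j) for i in range(len(pop)) for j in range(i + 1, len(pop))]
    if llm_rank is not None:
        return llm_rank(pairing_prompt(pop, dist), candidates)

    def score(pair):
        # Lower is better: fit parents, penalized for being too similar.
        i, j = pair
        combined = tour_length(pop[i], dist) + tour_length(pop[j], dist)
        return combined - 10.0 * diversity(pop[i], pop[j])

    return sorted(candidates, key=score)
```

In a real PAIR-style loop, `llm_rank` would send the prompt to a model and parse the returned ordering; the deterministic fallback only exists to make the preference trade-off (fitness versus diversity) concrete.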