🤖 AI Summary
Traditional particle swarm optimization (PSO) struggles with problems whose solutions are structured sequences—e.g., mathematical expressions or program code—because of its reliance on continuous, numeric representations.
Method: This paper introduces a native integration of large language models (LLMs) into the PSO framework: each particle's "velocity" is encoded as a natural-language prompt that guides the LLM to generate syntactically and semantically valid, structure-constrained candidate solutions. The approach combines prompt engineering, structured output constraints (e.g., grammar-guided decoding), and domain-specific fine-tuning.
Contribution/Results: Evaluated on the Traveling Salesman Problem (TSP), heuristic improvement for TSP, and symbolic regression, the method significantly outperforms standard PSO—particularly at generating high-quality, executable structured sequences. It establishes a paradigm of LLM-augmented swarm intelligence and empirically supports LLMs as general-purpose structured optimizers capable of navigating complex discrete search spaces while preserving semantic validity.
📝 Abstract
Optimization problems often require domain-specific expertise to design problem-dependent methodologies. Recently, several approaches have gained attention by integrating large language models (LLMs) into genetic algorithms. Building on this trend, we introduce Language Model Particle Swarm Optimization (LMPSO), a novel method that incorporates an LLM into the swarm intelligence framework of Particle Swarm Optimization (PSO). In LMPSO, the velocity of each particle is represented as a prompt that generates the next candidate solution, leveraging the capabilities of an LLM to produce solutions in accordance with the PSO paradigm. This integration enables an LLM-driven search process that adheres to the foundational principles of PSO. The proposed LMPSO approach is evaluated across multiple problem domains, including the Traveling Salesman Problem (TSP), heuristic improvement for TSP, and symbolic regression. These problems are traditionally challenging for standard PSO due to the structured nature of their solutions. Experimental results demonstrate that LMPSO is particularly effective for solving problems where solutions are represented as structured sequences, such as mathematical expressions or programmatic constructs. By incorporating LLMs into the PSO framework, LMPSO establishes a new direction in swarm intelligence research. This method not only broadens the applicability of PSO to previously intractable problems but also showcases the potential of LLMs in addressing complex optimization challenges.
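To make the core idea concrete, here is a minimal, hypothetical sketch of the LMPSO loop for TSP. It is not the paper's implementation: the `mock_llm` function is a deterministic stand-in for a real LLM call (it nudges the current tour toward the global best by copying a segment), and `build_velocity_prompt` only illustrates how a particle's "velocity" could be expressed as a prompt combining the current solution with the personal and global bests.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour under distance matrix `dist`."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def build_velocity_prompt(current, personal_best, global_best):
    # In LMPSO, the "velocity" is a prompt rather than a numeric vector.
    # This wording is illustrative, not taken from the paper.
    return (
        f"Current tour: {current}\n"
        f"This particle's best tour: {personal_best}\n"
        f"Swarm's best tour: {global_best}\n"
        "Propose an improved tour visiting every city exactly once."
    )

def mock_llm(prompt, current, global_best, rng):
    # Hypothetical stand-in for an LLM call: copy a random segment of the
    # global best into the current tour, keeping a valid permutation.
    n = len(current)
    i, j = sorted(rng.sample(range(n), 2))
    segment = global_best[i:j + 1]
    rest = [c for c in current if c not in segment]
    return rest[:i] + segment + rest[i:]

def lmpso(dist, n_particles=6, iters=30, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    particles = [rng.sample(range(n), n) for _ in range(n_particles)]
    pbest = list(particles)
    gbest = min(pbest, key=lambda t: tour_length(t, dist))
    for _ in range(iters):
        for k, tour in enumerate(particles):
            prompt = build_velocity_prompt(tour, pbest[k], gbest)
            cand = mock_llm(prompt, tour, gbest, rng)  # real LMPSO queries an LLM here
            particles[k] = cand
            if tour_length(cand, dist) < tour_length(pbest[k], dist):
                pbest[k] = cand
        gbest = min(pbest + [gbest], key=lambda t: tour_length(t, dist))
    return gbest
```

The sketch preserves the PSO structure (particles, personal bests, a global best) while delegating the solution-update step to a generative model, which is what lets the framework operate directly on structured sequences.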