Large Language Models as Particle Swarm Optimizers

📅 2025-04-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional particle swarm optimization (PSO) struggles with structured sequence solutions—e.g., mathematical expressions or program code—due to its inherent reliance on continuous, numeric representations. Method: This paper introduces the first native integration of large language models (LLMs) into the PSO framework: particle “velocity” is encoded as a natural-language prompt that guides the LLM to generate syntactically and semantically valid, structure-constrained candidate solutions. The approach combines prompt engineering, structured output constraints (e.g., grammar-guided decoding), and domain-specific fine-tuning. Contribution/Results: Evaluated on the traveling salesman problem (TSP), heuristic improvement for TSP, and symbolic regression, the method significantly outperforms standard PSO—particularly in generating high-quality, executable structured sequences. It establishes a paradigm of LLM-augmented swarm intelligence and empirically supports LLMs as general-purpose structured optimizers that can navigate complex discrete search spaces while preserving semantic validity.

📝 Abstract
Optimization problems often require domain-specific expertise to design problem-dependent methodologies. Recently, several approaches have gained attention by integrating large language models (LLMs) into genetic algorithms. Building on this trend, we introduce Language Model Particle Swarm Optimization (LMPSO), a novel method that incorporates an LLM into the swarm intelligence framework of Particle Swarm Optimization (PSO). In LMPSO, the velocity of each particle is represented as a prompt that generates the next candidate solution, leveraging the capabilities of an LLM to produce solutions in accordance with the PSO paradigm. This integration enables an LLM-driven search process that adheres to the foundational principles of PSO. The proposed LMPSO approach is evaluated across multiple problem domains, including the Traveling Salesman Problem (TSP), heuristic improvement for TSP, and symbolic regression. These problems are traditionally challenging for standard PSO due to the structured nature of their solutions. Experimental results demonstrate that LMPSO is particularly effective for solving problems where solutions are represented as structured sequences, such as mathematical expressions or programmatic constructs. By incorporating LLMs into the PSO framework, LMPSO establishes a new direction in swarm intelligence research. This method not only broadens the applicability of PSO to previously intractable problems but also showcases the potential of LLMs in addressing complex optimization challenges.
Problem

Research questions and friction points this paper is trying to address.

Integrating LLMs into Particle Swarm Optimization for structured problems
Solving optimization challenges like TSP and symbolic regression
Enhancing PSO with LLMs to handle sequence-based solutions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates LLM into Particle Swarm Optimization
Uses LLM prompts to generate candidate solutions
Solves structured sequence problems effectively
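The LMPSO loop described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the LLM call is stubbed out (`llm_generate` is a hypothetical placeholder that merely perturbs the current tour), and in the actual method the "velocity" prompt would be sent to a real LLM asking it to move a particle's solution toward its personal best and the global best.

```python
import random

def llm_generate(prompt, current):
    # Hypothetical stand-in for an LLM call. A real LMPSO implementation
    # would pass `prompt` to a language model and parse a new tour from
    # its response; here we just swap two cities so the sketch runs.
    tour = current[:]
    i, j = random.sample(range(len(tour)), 2)
    tour[i], tour[j] = tour[j], tour[i]
    return tour

def tour_length(tour, dist):
    # Total length of a closed TSP tour under distance matrix `dist`.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def lmpso_tsp(dist, n_particles=4, iterations=20, seed=0):
    random.seed(seed)
    n = len(dist)
    # Each particle is a candidate tour (a permutation of the cities).
    particles = [random.sample(range(n), n) for _ in range(n_particles)]
    pbest = [p[:] for p in particles]  # personal bests
    gbest = min(pbest, key=lambda t: tour_length(t, dist))  # global best
    for _ in range(iterations):
        for k, p in enumerate(particles):
            # The "velocity" is a prompt referencing the current solution,
            # the personal best, and the global best, per the PSO paradigm.
            prompt = (f"Current tour: {p}. Personal best: {pbest[k]}. "
                      f"Global best: {gbest}. Propose an improved tour.")
            candidate = llm_generate(prompt, p)
            particles[k] = candidate
            if tour_length(candidate, dist) < tour_length(pbest[k], dist):
                pbest[k] = candidate[:]
        gbest = min(pbest + [gbest], key=lambda t: tour_length(t, dist))
    return gbest
```

The structure mirrors standard PSO (particles, personal bests, a global best), with the numeric velocity update replaced by prompt-driven generation, which is what lets the search operate over structured sequences like tours or expressions.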
Yamato Shinohara
Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
Jinglue Xu
Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
Tianshui Li
Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
Hitoshi Iba
University of Tokyo
Artificial intelligence · Evolutionary systems · Complex systems