Data-Efficient Multi-Agent Spatial Planning with LLMs

📅 2025-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses multi-agent taxi dispatch on graph-structured road networks, aiming for data-efficient, robust spatial decision-making (minimizing total passenger waiting time) by leveraging the world knowledge embedded in pre-trained large language models (LLMs). Method: the paper proposes the first multi-agent planning framework integrating LLM zero-shot reasoning with one-at-a-time forward rollout, enhanced by semantics-aware prompt engineering, lightweight fine-tuning, and structured road-network modeling. Contribution/Results: the method surpasses existing SOTA while using only 1/50 as many environment interactions. It achieves strong zero-shot performance, and prompt effectiveness improves markedly when readily computable contextual information is included. Crucially, the LLM adapts to dynamic environmental changes through pure natural-language prompts. The core contribution is empirical validation that the spatial commonsense knowledge intrinsic to LLMs enables effective, generalizable coordination in multi-agent decision-making, even under sparse supervision and structural constraints.
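To make the one-at-a-time rollout idea concrete, here is a minimal sketch of sequential agent commitment on an unweighted road graph. This is not the paper's implementation: all function names are illustrative, and a nearest-passenger greedy rule stands in for the base policy (in the paper, the LLM's zero-shot choice plays that role). Each taxi commits in turn, evaluating each candidate passenger by simulating the remaining taxis under the base policy.

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src to every reachable node of an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def greedy_complete(dists, taxis, passengers):
    """Base policy stand-in: each remaining taxi takes its nearest free passenger."""
    cost, free = 0, list(passengers)
    for t in taxis:
        p = min(free, key=lambda q: dists[t][q])
        cost += dists[t][p]
        free.remove(p)
    return cost

def one_at_a_time_rollout(adj, taxis, passengers):
    """Taxis decide sequentially: taxi i scores each candidate passenger by
    its own pickup distance plus the base-policy completion cost for the
    taxis after it, then locks in the lowest-cost choice."""
    dists = {t: bfs_dist(adj, t) for t in taxis}
    assignment, free = {}, list(passengers)
    for i, t in enumerate(taxis):
        def total(p):
            rest = [q for q in free if q != p]
            tail = greedy_complete(dists, taxis[i + 1:], rest) if taxis[i + 1:] else 0
            return dists[t][p] + tail
        best = min(free, key=total)
        assignment[t] = best
        free.remove(best)
    return assignment
```

One attraction of this scheme, which the paper exploits, is that each per-agent decision is a single small choice problem, so an LLM can be queried once per agent rather than over the joint action space.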

📝 Abstract
In this project, our goal is to determine how to leverage the world knowledge of pretrained large language models for efficient and robust learning in multi-agent decision making. We examine this in a taxi routing and assignment problem where agents must decide how best to pick up passengers in order to minimize overall waiting time. While this problem is situated on a graphical road network, we show that with the proper prompting, zero-shot performance is quite strong on this task. Furthermore, with limited fine-tuning along with the one-at-a-time rollout algorithm for lookahead, LLMs can outperform existing approaches with 50 times fewer environment interactions. We also explore the benefits of various linguistic prompting approaches and show that including certain easy-to-compute information in the prompt significantly improves performance. Finally, we highlight the LLM's built-in semantic understanding, showing its ability to adapt to environmental factors through simple prompts.
Problem

Research questions and friction points this paper is trying to address.

Leverage LLMs for multi-agent decision making
Optimize taxi routing to minimize waiting time
Enhance performance with linguistic prompting techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverage pretrained LLMs for decision making
Use zero-shot and fine-tuning techniques
Incorporate linguistic prompts for efficiency
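The prompting ideas above can be sketched as a small prompt builder. This is an illustrative assumption, not the paper's actual prompt template: the function name, wording, and input format are invented. The key point it demonstrates is injecting easy-to-compute context (here, precomputed shortest-path hop counts) into the natural-language prompt, the kind of information the paper reports significantly improves zero-shot performance.

```python
def build_dispatch_prompt(taxi_id, taxi_node, passenger_info):
    """Build a dispatch prompt for one taxi.

    passenger_info: list of (passenger_id, node, hops_away) tuples, where
    hops_away is a precomputed shortest-path distance on the road graph.
    """
    lines = [
        f"You are dispatching taxi {taxi_id}, currently at intersection {taxi_node}.",
        "Waiting passengers (id, intersection, shortest-path hops from you):",
    ]
    # Sort nearest-first so the cheap-to-compute distances structure the context.
    for pid, node, hops in sorted(passenger_info, key=lambda x: x[2]):
        lines.append(f"  - passenger {pid} at intersection {node}, {hops} hop(s) away")
    lines.append(
        "Reply with the single passenger id this taxi should pick up "
        "to minimize total waiting time."
    )
    return "\n".join(lines)
```

Environmental changes (e.g., a closed road or surge of demand in one district) would be communicated the same way, as an extra natural-language line in the prompt, which is the adaptation mechanism the summary highlights.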