LLMPC: Large Language Model Predictive Control

📅 2025-01-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) exhibit uncontrollable implicit optimization behaviors and suffer from inaccurate planning difficulty assessment in complex task planning. Method: We propose an explicit planning paradigm grounded in Model Predictive Control (MPC), establishing for the first time a theoretical correspondence between LLM-based planning and MPC. Our approach introduces a differentiable planning cost function, a multi-granularity action evaluator, and a cost-driven, plug-and-play mechanism that replaces conventional black-box prompt engineering. Integrated with structured reasoning prompts, it enables explicit, interpretable, and controllable planning. Contribution/Results: Evaluated on multiple complex task planning benchmarks, our method achieves a +12.7% improvement in task success rate and enhanced robustness. Empirical results demonstrate that cost guidance effectively calibrates LLMs’ implicit optimization processes, offering a novel, principled paradigm for controllable and interpretable LLM planning.

Technology Category

Application Category

📝 Abstract
Recent advancements in prompting techniques for Large Language Models (LLMs) have improved their reasoning, planning, and action abilities. This paper examines these prompting techniques through the lens of model predictive control (MPC). We show that LLMs act as implicit planning cost function minimizers when planning prompts are used. Under our framework we demonstrate that LLM planning performance can be improved further by incorporating real planning cost functions and evaluators.
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Model Predictive Control
Planning Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Prompt Engineering
Planning Performance Enhancement
🔎 Similar Papers
No similar papers found.