A Survey on the Optimization of Large Language Model-based Agents

📅 2025-03-16
🤖 AI Summary
This paper addresses the limitations of large language model (LLM)-based agents in long-horizon planning, dynamic interaction, and complex decision-making within intricate environments. Methodologically, it presents the first systematic optimization survey, proposing a unified classification framework that dichotomizes optimization strategies into parameter-driven approaches (e.g., supervised fine-tuning, PPO, DPO, together with trajectory data construction and reward function design) and parameter-free techniques (e.g., prompt engineering and retrieval-augmented generation). It further analyzes critical integrative aspects—such as hybrid optimization—and synthesizes evaluation benchmarks and representative applications. Contributions include: (1) a structured, comprehensive review encompassing over 100 works; (2) an open-source, standardized reference library hosted on GitHub; and (3) a clear articulation of open challenges and actionable research directions. Collectively, this work provides both theoretical foundations and a reproducible toolchain for efficient LLM agent optimization.
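As a concrete illustration of the parameter-driven branch the summary mentions, Direct Preference Optimization (DPO) tunes the policy directly on preference pairs rather than training a separate reward model. Its standard objective (from the original DPO paper by Rafailov et al., not specific to this survey) is:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) =
  -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}
  \left[ \log \sigma \left(
    \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
    - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
  \right) \right]
```

Here $\pi_\theta$ is the policy being optimized, $\pi_{\mathrm{ref}}$ a frozen reference policy, $(x, y_w, y_l)$ a prompt with preferred and dispreferred responses, $\sigma$ the logistic function, and $\beta$ a temperature controlling deviation from the reference. In the agent setting, the preference pairs are typically derived from successful versus failed trajectories.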

📝 Abstract
With the rapid development of Large Language Models (LLMs), LLM-based agents have been widely adopted in various fields, becoming essential for autonomous decision-making and interactive tasks. However, current work typically relies on prompt design or fine-tuning strategies applied to vanilla LLMs, which often leads to limited effectiveness or suboptimal performance in complex agent-related environments. Although LLM optimization techniques can improve model performance across many general tasks, they lack specialized optimization toward critical agent functionalities such as long-term planning, dynamic environmental interaction, and complex decision-making. While numerous recent studies have explored various strategies to optimize LLM-based agents for complex agent tasks, a systematic review summarizing and comparing these methods from a holistic perspective is still lacking. In this survey, we provide a comprehensive review of LLM-based agent optimization approaches, categorizing them into parameter-driven and parameter-free methods. We first focus on parameter-driven optimization, covering fine-tuning-based optimization, reinforcement learning-based optimization, and hybrid strategies, analyzing key aspects such as trajectory data construction, fine-tuning techniques, reward function design, and optimization algorithms. Additionally, we briefly discuss parameter-free strategies that optimize agent behavior through prompt engineering and external knowledge retrieval. Finally, we summarize the datasets and benchmarks used for evaluation and tuning, review key applications of LLM-based agents, and discuss major challenges and promising future directions. Our repository for related references is available at https://github.com/YoungDubbyDu/LLM-Agent-Optimization.
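The parameter-free strategies the abstract describes steer agent behavior through the prompt rather than the weights, combining external knowledge retrieval with in-context demonstration trajectories. A minimal sketch of this idea follows; all function names, the toy lexical retriever, and the prompt layout are illustrative assumptions, not the survey's own implementation.

```python
# Parameter-free agent optimization sketch: behavior is shaped by the prompt
# (retrieved knowledge + example trajectories), not by gradient updates.

def retrieve(query: str, knowledge_base: list[str], top_k: int = 2) -> list[str]:
    """Toy lexical retriever: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_agent_prompt(task: str, knowledge_base: list[str],
                       few_shot_examples: list[str]) -> str:
    """Assemble a prompt from retrieved context and demonstration trajectories."""
    context = "\n".join(retrieve(task, knowledge_base))
    demos = "\n\n".join(few_shot_examples)
    return (f"Relevant knowledge:\n{context}\n\n"
            f"Example trajectories:\n{demos}\n\n"
            f"Task: {task}\nThink step by step, then act.")

kb = ["The museum opens at 9 am on weekdays.",
      "Tickets can be booked online in advance."]
demos = ["Task: book a table. Thought: check availability. Action: call API."]
prompt = build_agent_prompt("book a museum ticket", kb, demos)
print(prompt)
```

In a real agent, the returned string would be sent to an LLM; improving retrieval quality or the demonstration trajectories improves agent behavior without touching model parameters, which is exactly the trade-off the survey contrasts with fine-tuning.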
Problem

Research questions and friction points this paper is trying to address.

Optimizing LLM-based agents for complex tasks
Addressing limitations in long-term planning and decision-making
Systematically reviewing and categorizing optimization methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter-driven optimization methods for LLM-based agents
Parameter-free strategies using prompt engineering
Review of datasets and benchmarks for agent evaluation
👥 Authors
Shangheng Du
Shanghai Institute of Artificial Intelligence for Education, East China Normal University; School of Computer Science and Technology, East China Normal University, China
Jiabao Zhao
School of Computer Science and Technology, Donghua University, China
Jinxin Shi
East China Normal University
Zhentao Xie
School of Computer Science and Technology, East China Normal University, China
Xin Jiang
School of Computer Science and Technology, East China Normal University, China
Yanhong Bai
Shanghai Institute of Artificial Intelligence for Education, East China Normal University; School of Computer Science and Technology, East China Normal University, China
Liang He
School of Computer Science and Technology, East China Normal University, China