Retrospex: Language Agent Meets Offline Reinforcement Learning Critic

📅 2025-05-17
🏛️ Conference on Empirical Methods in Natural Language Processing
📈 Citations: 1
Influential: 1
🤖 AI Summary
Existing LLM-based agents struggle to leverage historical experience for better decision-making. To address this, we propose the first LLM agent framework integrated with an offline reinforcement learning (RL) critic. Our method performs retrospective analysis of historical interaction data to build a dynamic action rescoring mechanism: without explicitly injecting past experiences into the context window, it adaptively fuses the language model's output probability distribution with the critic's estimated action values, enabling a decoupled yet synergistic integration of linguistic knowledge and experiential value. This design sidesteps the context-length limitations and generalization bottlenecks of conventional experience replay. Evaluated on three interactive benchmarks (ScienceWorld, ALFWorld, and WebShop), our framework significantly outperforms strong baselines, with marked success-rate gains on long-horizon, multi-step tasks. These results validate a new paradigm for optimizing LLM agents via offline RL.

📝 Abstract
Large language models (LLMs) possess extensive knowledge and commonsense reasoning capabilities, making them valuable for creating powerful agents. However, existing LLM agent frameworks have not fully utilized past experiences for improvement. This work introduces a new LLM-based agent framework called Retrospex, which addresses this challenge by analyzing past experiences in depth. Unlike previous approaches, Retrospex does not directly integrate experiences into the LLM’s context. Instead, it combines the LLM’s action likelihood with action values estimated by a Reinforcement Learning (RL) Critic, which is trained on past experiences through an offline “retrospection” process. Additionally, Retrospex employs a dynamic action rescoring mechanism that increases the importance of experience-based values for tasks that require more interaction with the environment. We evaluate Retrospex in the ScienceWorld, ALFWorld, and WebShop environments, demonstrating its advantages over strong baselines.
Problem

Research questions and friction points this paper is trying to address.

Existing LLM agent frameworks do not fully exploit past experiences for improvement
Injecting experiences directly into the LLM's context window runs into context-length and generalization limits
Long-horizon, multi-step tasks demand decision-making informed by accumulated interaction experience
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines the LLM's action likelihood with action values from an RL Critic
Trains the RL Critic on past experiences via an offline "retrospection" process
Dynamically rescores actions, weighting experience-based values more heavily for interaction-heavy tasks
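The fusion described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the function names, the min-max normalization, and the convex-combination weighting scheme are all assumptions introduced here to make the idea concrete.

```python
def rescore_actions(llm_logprobs, critic_q, horizon_weight):
    """Fuse LLM action log-likelihoods with critic action values.

    llm_logprobs:   dict action -> log p_LLM(action | context)
    critic_q:       dict action -> Q(state, action) from an offline-trained critic
    horizon_weight: lambda in [0, 1]; larger for interaction-heavy tasks
                    (a hypothetical stand-in for the paper's dynamic weighting)
    """
    # Min-max normalize each signal to [0, 1] so the two scales are comparable
    def normalize(scores):
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {a: (s - lo) / span for a, s in scores.items()}

    p = normalize(llm_logprobs)
    q = normalize(critic_q)
    # Convex combination: experience-based values gain weight as lambda grows
    return {a: (1 - horizon_weight) * p[a] + horizon_weight * q[a]
            for a in llm_logprobs}

def select_action(llm_logprobs, critic_q, horizon_weight=0.3):
    """Pick the highest-scoring action under the fused score."""
    scores = rescore_actions(llm_logprobs, critic_q, horizon_weight)
    return max(scores, key=scores.get)
```

With `horizon_weight = 0` this reduces to ordinary LLM action selection; as it approaches 1, the critic's experience-based values dominate, which matches the abstract's description of boosting experience for tasks that require more environment interaction.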
Yufei Xiang
State Key Laboratory for Novel Software Technology, Nanjing University, School of Artificial Intelligence, Nanjing University, Nanjing, China
Yiqun Shen
State Key Laboratory for Novel Software Technology, Nanjing University, School of Artificial Intelligence, Nanjing University, Nanjing, China
Yeqin Zhang
Nanjing University
Information Retrieval · Agent · Large Language Model
Cam-Tu Nguyen
Associate Professor of AI School, Nanjing University, China
Data Mining · Image Annotation · Text Mining · Machine Learning · Graphical Models