🤖 AI Summary
Existing LLM-based agents struggle to leverage historical experience for better decision-making. To address this, we propose the first LLM agent framework integrated with an offline reinforcement learning (RL) critic. Our method performs retrospective analysis of historical interaction data to build a dynamic action rescoring mechanism: rather than explicitly injecting past experiences into the context window, it adaptively fuses the language model's output probability distribution with the critic's estimated action values, integrating linguistic knowledge and experiential value in a decoupled yet synergistic way. This design sidesteps the context-length limitations and generalization bottlenecks of conventional experience replay. Evaluated on three interactive benchmarks (ScienceWorld, ALFWorld, and WebShop), our framework significantly outperforms strong baselines, with marked success-rate gains on long-horizon, multi-step tasks. These results validate a new paradigm for optimizing LLM agents via offline RL.
📝 Abstract
Large language models (LLMs) possess extensive knowledge and commonsense reasoning capabilities, making them valuable for creating powerful agents. However, existing LLM agent frameworks have not fully utilized past experiences for improvement. This work introduces a new LLM-based agent framework called Retrospex, which addresses this challenge by analyzing past experiences in depth. Unlike previous approaches, Retrospex does not directly integrate experiences into the LLM's context. Instead, it combines the LLM's action likelihood with action values estimated by a Reinforcement Learning (RL) Critic, which is trained on past experiences through an offline "retrospection" process. Additionally, Retrospex employs a dynamic action rescoring mechanism that increases the importance of experience-based values for tasks that require more interaction with the environment. We evaluate Retrospex in the ScienceWorld, ALFWorld, and WebShop environments, demonstrating its advantages over strong baselines.
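To make the rescoring idea concrete, here is a minimal illustrative sketch (not the paper's exact formulation): candidate actions are scored by a weighted combination of the LLM's log-likelihood and the critic's Q-value, where the weight on the critic (`horizon_weight` here, a name introduced for this example) grows for tasks that demand more environment interaction.

```python
def rescore_actions(lm_logprobs, critic_q, horizon_weight):
    """Illustrative dynamic action rescoring (a sketch, not the exact method).

    lm_logprobs:    dict mapping action -> log-probability under the LLM
    critic_q:       dict mapping action -> value from the offline RL critic
    horizon_weight: float in [0, 1]; larger for interaction-heavy tasks,
                    shifting weight from linguistic prior to experience.
    """
    scores = {
        a: (1 - horizon_weight) * lm_logprobs[a] + horizon_weight * critic_q[a]
        for a in lm_logprobs
    }
    # Greedily select the action with the highest fused score.
    return max(scores, key=scores.get)

# Toy usage: the LLM prefers "open door", but past experience
# (the critic) has learned that "take key" pays off.
lm = {"open door": -0.2, "take key": -1.6}
q = {"open door": 0.1, "take key": 0.9}
print(rescore_actions(lm, q, horizon_weight=0.2))  # → open door (LM dominates)
print(rescore_actions(lm, q, horizon_weight=0.9))  # → take key (critic dominates)
```

The example shows the decoupling the abstract describes: the critic never enters the prompt; it only reweights the LLM's candidate actions at decision time.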