AI Summary
This work addresses the challenge of sparse rewards in multi-step reasoning tasks, where reinforcement learning struggles to optimize due to the lack of informative feedback and the common assumption that trajectories are independent sequences, which ignores implicit dependencies among critical reasoning steps. To overcome these limitations, the authors propose T-STAR, a tree-structured self-correction framework that integrates multiple reasoning trajectories into a cognitive tree. At key divergence points, T-STAR generates refined paths through context-aware thought grafting and employs an introspective valuation mechanism grounded in the Bradley–Terry model to enable trajectory-level reward backpropagation and precise, surgical policy updates. Experimental results demonstrate that T-STAR significantly outperforms strong baselines across diverse reasoning and planning benchmarks, with particularly pronounced gains on tasks requiring long reasoning chains.
Abstract
Reinforcement learning for Large Language Model agents is often hindered by sparse rewards in multi-step reasoning tasks. Existing approaches such as Group Relative Policy Optimization treat sampled trajectories as independent chains, assigning uniform credit to all steps in each chain and ignoring the existence of critical steps that may disproportionately impact reasoning outcomes. In this paper, we propose T-STAR (Tree-structured Self-Taught Agent Rectification), a framework that recovers the latent correlated reward structure across seemingly independent trajectories. Specifically, we consolidate trajectories into a unified Cognitive Tree by identifying and merging functionally similar steps (nodes). This consolidation enables an Introspective Valuation mechanism that back-propagates trajectory-level rewards through the tree to obtain a new notion of variance-reduced, step-level relative advantage. Using the Cognitive Tree, we also develop In-Context Thought Grafting to synthesize corrective reasoning by contrasting successful and failed branches at critical divergence points. Our proposed Surgical Policy Optimization then capitalizes on the rich policy-gradient information concentrated at these critical steps through a Bradley–Terry-style surgical loss. Extensive experiments across embodied, interactive, reasoning, and planning benchmarks demonstrate that T-STAR achieves consistent improvements over strong baselines, with gains most pronounced on tasks requiring extended reasoning chains.
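To make the abstract's two core ideas concrete, here is a minimal, illustrative sketch, not the paper's actual algorithm: a Cognitive Tree represented as a plain dict of node-to-children edges, with trajectory-level rewards attached to leaves. `backpropagate_values` illustrates Introspective Valuation (a node's value as the mean reward of leaves reachable from it), and `bradley_terry_loss` illustrates a Bradley–Terry-style pairwise loss contrasting a successful and a failed branch at a divergence point. All names, the tree encoding, and the mean-aggregation rule are assumptions for exposition only.

```python
import math

def backpropagate_values(tree, leaf_rewards, node="root"):
    """Introspective Valuation sketch (assumed aggregation rule):
    a node's value is the mean trajectory-level reward over all
    leaves reachable from it in the Cognitive Tree."""
    children = tree.get(node, [])
    if not children:  # leaf: value is its trajectory reward
        return {node: leaf_rewards[node]}
    values = {}
    child_values = []
    for child in children:
        sub = backpropagate_values(tree, leaf_rewards, child)
        values.update(sub)
        child_values.append(sub[child])
    values[node] = sum(child_values) / len(child_values)
    return values

def bradley_terry_loss(v_success, v_failure):
    """Bradley-Terry-style pairwise loss at a divergence point:
    negative log-probability of preferring the successful branch."""
    p_prefer = 1.0 / (1.0 + math.exp(-(v_success - v_failure)))
    return -math.log(p_prefer)

# Toy tree: one shared prefix step "s1" that diverges into a
# successful branch ("good", reward 1.0) and a failed one ("bad", 0.0).
tree = {"root": ["s1"], "s1": ["good", "bad"]}
leaf_rewards = {"good": 1.0, "bad": 0.0}
values = backpropagate_values(tree, leaf_rewards)
loss = bradley_terry_loss(values["good"], values["bad"])
```

In this toy example the shared node `s1` receives the mean of its branches' rewards (0.5), so subtracting a node's value from a leaf's reward yields a branch-relative advantage signal concentrated at the divergence point, which is the intuition behind the step-level relative advantage described above.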