Beyond Imitation: Recovering Dense Rewards from Demonstrations

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work challenges the conventional view of supervised fine-tuning (SFT) as mere behavioral cloning, revealing its fundamental equivalence to inverse reinforcement learning (IRL) and its implicit learning of a token-level dense reward model. To explicitly recover this fine-grained reward signal, we establish, for the first time, the theoretical connection between SFT and inverse Q-learning, propose a baseline-relative reward function design, and develop Dense-Path REINFORCE for policy optimization. Experiments on instruction-following benchmarks demonstrate significant improvements over standard SFT, validating both the effectiveness and generalizability of implicit reward extraction. Our core contributions are threefold: (1) a theoretical proof of the equivalence between SFT and IRL; (2) a methodological framework for interpretable, reusable dense reward recovery; and (3) a practical paradigm shift enabling fine-grained utilization of expert demonstrations.

📝 Abstract
Conventionally, supervised fine-tuning (SFT) is treated as a simple imitation learning process that only trains a policy to imitate expert behavior on demonstration datasets. In this work, we challenge this view by establishing a fundamental equivalence between SFT and Inverse Reinforcement Learning. We prove that the SFT objective is a special case of Inverse Q-Learning, which implies that the SFT process does not just learn a policy, but also an implicit, dense, token-level reward model that explains the expert demonstrations. We then show how to recover this dense reward signal directly from the SFT model by formulating a baseline-relative reward function. The availability of such a dense reward model offers numerous benefits, providing granular credit assignment for each token generated. We demonstrate one key application by using these recovered rewards to further improve the policy with reinforcement learning. Our method, Dense-Path REINFORCE, consistently outperforms the original SFT models on instruction-following benchmarks. This work reframes SFT not merely as policy imitation but as a powerful reward learning mechanism, opening new possibilities for leveraging expert demonstrations.
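One concrete way to read the abstract's "baseline-relative reward function" is as a per-token gap between the SFT model's log-probability and that of a reference baseline. The sketch below is illustrative only: the log-probabilities are made-up numbers and `baseline_relative_rewards` is a hypothetical helper; the paper's exact reward formulation may differ.

```python
def baseline_relative_rewards(sft_logprobs, ref_logprobs):
    """Per-token dense reward as the SFT-vs-baseline log-probability gap.

    One plausible instantiation of a baseline-relative reward function;
    the paper's exact form may differ.
    """
    return [s - r for s, r in zip(sft_logprobs, ref_logprobs)]

# Toy per-token log-probabilities for one generated sequence (illustrative values).
sft_lp = [-0.2, -1.1, -0.4, -2.3]  # from the fine-tuned (SFT) model
ref_lp = [-0.9, -1.0, -1.5, -2.0]  # from a reference baseline model

rewards = baseline_relative_rewards(sft_lp, ref_lp)
# Positive entries mark tokens the SFT model prefers more strongly than the baseline,
# giving granular, token-level credit assignment.
```

Such a signal is "free" in the sense that it requires no extra reward-model training: both log-probabilities are already available from a forward pass of each model.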
Problem

Research questions and friction points this paper is trying to address.

Establishes equivalence between supervised fine-tuning and inverse reinforcement learning
Recovers implicit dense token-level rewards from supervised fine-tuning models
Demonstrates improved policy performance using recovered rewards for reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

SFT recovers dense token-level reward model
Baseline-relative reward function extracts implicit rewards
Dense-Path REINFORCE improves policy with recovered rewards
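The dense rewards above enable token-level credit assignment: a REINFORCE-style update can weight each token's log-probability gradient by the reward accumulated from that token onward, rather than by a single sequence-level score. The toy sketch below (a plain softmax policy over a 3-token vocabulary, with invented reward values) illustrates that return-to-go mechanic; it is an assumption-laden simplification, not the paper's implementation.

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_logit_grads(logits, action, weight):
    # Gradient of weight * log pi(action) with respect to categorical logits.
    probs = softmax(logits)
    return [weight * ((1.0 if i == action else 0.0) - p) for i, p in enumerate(probs)]

# One sampled token trajectory with dense per-token rewards (illustrative values).
actions = [2, 0, 1]
dense_rewards = [0.5, -0.2, 1.0]

# Return-to-go: each token is credited with the rewards from its position onward.
returns, g = [], 0.0
for r in reversed(dense_rewards):
    g += r
    returns.append(g)
returns.reverse()

# One pass of REINFORCE-style updates on toy logits over the 3-token vocabulary.
logits, lr = [0.0, 0.0, 0.0], 0.1
for a, g_t in zip(actions, returns):
    grads = reinforce_logit_grads(logits, a, g_t)
    logits = [x + lr * d for x, d in zip(logits, grads)]
```

The key contrast with sequence-level RLHF-style updates is the per-token `g_t`: tokens followed by high reward get reinforced more, which is exactly the granular credit assignment the dense reward model makes possible.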
Authors

Jiangnan Li, Department of Data Science and AI, Monash University
Thuy-Trang Vu, Monash University (Natural Language Processing, Machine Learning)
Ehsan Abbasnejad, Assoc. Prof., Monash University (Machine Learning, Responsible Machine Learning, Vision and Language, Machine Reasoning, Bayesian)
Gholamreza Haffari, Department of Data Science and AI, Monash University