🤖 AI Summary
This work challenges the conventional view of supervised fine-tuning (SFT) as mere behavioral cloning, revealing its fundamental equivalence to inverse reinforcement learning (IRL): SFT implicitly learns a token-level dense reward model. To recover this fine-grained reward signal explicitly, we establish, for the first time, the theoretical connection between SFT and inverse Q-learning, propose a baseline-relative reward function design, and develop Dense-Path REINFORCE for policy optimization. Experiments on instruction-following benchmarks demonstrate significant improvements over standard SFT, validating both the effectiveness and the generalizability of implicit reward extraction. Our core contributions are threefold: (1) a theoretical proof of equivalence between SFT and IRL; (2) a methodological framework for interpretable, reusable dense reward recovery; and (3) a practical paradigm shift enabling fine-grained utilization of expert demonstrations.
📝 Abstract
Conventionally, supervised fine-tuning (SFT) is treated as a simple imitation learning process that trains a policy to mimic expert behavior on demonstration datasets. In this work, we challenge this view by establishing a fundamental equivalence between SFT and inverse reinforcement learning. We prove that the SFT objective is a special case of inverse Q-learning, which implies that the SFT process does not just learn a policy: it also learns an implicit, dense, token-level reward model that explains the expert demonstrations. We then show how to recover this dense reward signal directly from the SFT model by formulating a baseline-relative reward function. Such a dense reward model offers numerous benefits, providing granular credit assignment for each generated token. We demonstrate one key application by using these recovered rewards to further improve the policy with reinforcement learning. Our method, Dense-Path REINFORCE, consistently outperforms the original SFT models on instruction-following benchmarks. This work reframes SFT not merely as policy imitation but as a powerful reward learning mechanism, opening new possibilities for leveraging expert demonstrations.
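To make the two ingredients above concrete, here is a minimal sketch of how a baseline-relative dense reward and a REINFORCE-style objective over those rewards could fit together. The paper's exact formulation is not given in this excerpt, so this assumes the implicit per-token reward takes the common log-ratio form `log pi_sft(a_t|s_t) - log pi_base(a_t|s_t)` (SFT model relative to a baseline/reference model); the function names `dense_rewards` and `reinforce_loss` are illustrative, not from the paper.

```python
import math

def token_log_probs(logits, token_ids):
    """Log-probability of each chosen token under a softmax over logits.

    `logits` is a list of per-step logit vectors; `token_ids` is the
    token chosen at each step.
    """
    out = []
    for step_logits, tok in zip(logits, token_ids):
        z = max(step_logits)  # subtract max for numerical stability
        log_norm = z + math.log(sum(math.exp(l - z) for l in step_logits))
        out.append(step_logits[tok] - log_norm)
    return out

def dense_rewards(sft_logits, base_logits, token_ids):
    """Baseline-relative token-level reward (assumed log-ratio form):
    r_t = log pi_sft(a_t|s_t) - log pi_base(a_t|s_t).
    Positive where the SFT model upweights the token relative to the baseline.
    """
    lp_sft = token_log_probs(sft_logits, token_ids)
    lp_base = token_log_probs(base_logits, token_ids)
    return [a - b for a, b in zip(lp_sft, lp_base)]

def reinforce_loss(policy_log_probs, rewards, gamma=1.0):
    """REINFORCE with dense per-token rewards: each token's log-prob is
    weighted by the reward-to-go from that position onward. Negated so
    that minimizing the loss maximizes expected return."""
    T = len(rewards)
    returns = [0.0] * T
    running = 0.0
    for t in reversed(range(T)):  # accumulate discounted reward-to-go
        running = rewards[t] + gamma * running
        returns[t] = running
    return -sum(lp * g for lp, g in zip(policy_log_probs, returns))
```

In practice the log-probabilities would come from two frozen language models scored on the same token sequence, and the loss would be backpropagated through the policy's log-probabilities only; the dense rewards act as a per-token credit-assignment signal rather than a single sequence-level score.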