On the Non-decoupling of Supervised Fine-tuning and Reinforcement Learning in Post-training

📅 2026-01-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates whether supervised fine-tuning (SFT) and reinforcement learning (RL) can be decoupled in the post-training of large language models. Through theoretical analysis and controlled experiments on Qwen3-0.6B, the work provides the first rigorous evidence that SFT and RL are inherently coupled regardless of the order in which they are applied: running RL after SFT increases the SFT loss, while running SFT after RL reduces the reward. The findings reveal a bidirectional degradation mechanism rooted in an intrinsic coupling between the cross-entropy loss and the reward signal, offering both theoretical grounding and empirical support for the design of effective post-training strategies.
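The coupling described above can be reproduced in a minimal toy model. The sketch below is ours, not the paper's code: a single categorical policy over three tokens, an assumed expert distribution, and an assumed rule-based reward that disagrees with the expert. Starting from the exact SFT optimum, one policy-gradient ascent step on the reward raises the cross-entropy (SFT) loss, and a subsequent SFT descent step gives part of the reward back, mirroring the bidirectional degradation.

```python
# Toy illustration of the SFT/RL coupling claim (ours, not the paper's code).
# Policy: a categorical distribution over 3 tokens, parameterized by logits.
import numpy as np

expert = np.array([0.7, 0.2, 0.1])   # assumed expert response distribution
reward = np.array([0.0, 1.0, 0.0])   # assumed verifier reward favoring token 1

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def sft_loss(theta):
    # cross-entropy between the expert distribution and the policy
    return -(expert * np.log(softmax(theta))).sum()

def rl_gradient(theta):
    # exact gradient of the expected reward E_{y~pi}[r(y)] w.r.t. the logits
    p = softmax(theta)
    return p * (reward - (p * reward).sum())

# Start at the SFT optimum, where softmax(theta) equals the expert exactly.
theta = np.log(expert)
print(f"SFT loss at SFT optimum : {sft_loss(theta):.4f}")

# (1) SFT-then-RL: one RL ascent step increases the SFT loss.
theta_rl = theta + 0.5 * rl_gradient(theta)
print(f"SFT loss after RL step  : {sft_loss(theta_rl):.4f}")  # strictly larger
print(f"reward after RL step    : {(softmax(theta_rl) * reward).sum():.4f}")

# (2) RL-then-SFT: one SFT descent step lowers the reward again.
theta_sft = theta_rl - 0.5 * (softmax(theta_rl) - expert)  # grad of sft_loss
print(f"reward after SFT step   : {(softmax(theta_sft) * reward).sum():.4f}")
```

Because the expert distribution is the unique minimizer of the cross-entropy, any move away from it, including the reward-ascent direction, must increase the SFT loss; the toy makes both directions of the claim visible in a few lines.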

📝 Abstract
Post-training of large language models routinely interleaves supervised fine-tuning (SFT) with reinforcement learning (RL). These two methods have different objectives: SFT minimizes the cross-entropy loss between model outputs and expert responses, while RL maximizes reward signals derived from human preferences or rule-based verifiers. Modern reasoning models have widely adopted the practice of alternating SFT and RL training. However, there is no theoretical account of whether the two can be decoupled. We prove that decoupling is impossible in either order: (1) SFT-then-RL coupling: RL increases the SFT loss under SFT optimality, and (2) RL-then-SFT coupling: SFT lowers the reward achieved by RL. Experiments on Qwen3-0.6B confirm the predicted degradation, verifying that SFT and RL cannot be separated without loss of prior performance in the post-training pipeline.
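For orientation, the two objectives contrasted in the abstract can be written explicitly; this is a standard formalization, and the notation below is ours rather than necessarily the paper's.

```latex
% SFT: minimize cross-entropy between the policy and expert responses
\mathcal{L}_{\mathrm{SFT}}(\theta)
  = -\,\mathbb{E}_{(x,\,y^{*}) \sim \mathcal{D}}
      \bigl[\log \pi_{\theta}(y^{*} \mid x)\bigr],
% RL: maximize the expected reward under the policy
\mathcal{J}_{\mathrm{RL}}(\theta)
  = \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_{\theta}(\cdot \mid x)}
      \bigl[r(x, y)\bigr].
```

In this notation, claim (1) says that an ascent step on \mathcal{J}_{\mathrm{RL}} taken at a minimizer of \mathcal{L}_{\mathrm{SFT}} increases \mathcal{L}_{\mathrm{SFT}}, and claim (2) says that a descent step on \mathcal{L}_{\mathrm{SFT}} taken after RL lowers \mathcal{J}_{\mathrm{RL}}.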
Problem

Research questions and friction points this paper is trying to address.

supervised fine-tuning
reinforcement learning
post-training
decoupling
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

supervised fine-tuning
reinforcement learning
post-training
non-decoupling
large language models
👥 Authors
Xueyan Niu
Theory Lab, 2012 Labs, Huawei Technologies Co., Ltd.
information theory, machine learning, communication
Bo Bai
Theory Laboratory, Central Research Institute, 2012 Laboratories, Huawei Technologies Co., Ltd.
Wei Han
Theory Lab, Central Research Institute, 2012 Labs, Huawei Technologies
Signal processing, Wireless communications, Wireless caching, Information theory
Weixi Zhang
Theory Laboratory, Central Research Institute, 2012 Laboratories, Huawei Technologies Co., Ltd.