🤖 AI Summary
To address the limitations of supervised fine-tuning (SFT) in long-text generation, namely data saturation and teacher-signal bias, this paper proposes an adaptive curriculum reinforcement learning framework. It introduces three key components: (1) margin-aware data selection, which dynamically identifies high-potential training samples; (2) a pairwise comparison reward mechanism, which mitigates scalar reward sparsity; and (3) dynamic reference scheduling, which enables difficulty-progressive curriculum learning. Evaluated on 7B-parameter writer models, the approach significantly outperforms strong SFT baselines on long-form writing. Unexpectedly, it also generalizes to long-input reasoning tasks, a first empirical demonstration that training for long-output generation transfers positively to long-context understanding. This finding points toward a paradigm that unifies long-input and long-output capability modeling within a single framework.
📝 Abstract
Recent advances in Large Language Models (LLMs) have enabled strong performance in long-form writing, yet existing supervised fine-tuning (SFT) approaches suffer from limitations such as data saturation and learning capacity bounded by teacher signals. In this work, we present Writing-RL, an Adaptive Curriculum Reinforcement Learning framework that advances long-form writing capabilities beyond SFT. The framework consists of three key components: a Margin-aware Data Selection strategy that prioritizes samples with high learning potential; a Pairwise Comparison Reward mechanism that provides discriminative learning signals in the absence of verifiable rewards; and a Dynamic Reference Scheduling approach, which plays a particularly critical role by adaptively adjusting task difficulty based on evolving model performance. Experiments on 7B-scale writer models show that our RL framework substantially improves long-form writing performance over strong SFT baselines. Furthermore, we observe that models trained with long-output RL generalize surprisingly well to long-input reasoning tasks, potentially offering a promising perspective for rethinking long-context training.
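To make the reward and scheduling ideas concrete, here is a minimal toy sketch of how a pairwise comparison reward and a dynamic reference schedule could be wired together. Everything below is a hypothetical illustration, not the paper's implementation: `judge_prefers` stands in for an LLM judge (replaced here by a trivial length heuristic), and the reference list and win-rate thresholding are invented for demonstration.

```python
# Hypothetical sketch: pairwise comparison reward + dynamic reference scheduling.
# The real system would use an LLM judge and learned policies; this toy version
# only shows the shape of the signals described in the abstract.

def judge_prefers(candidate: str, reference: str) -> bool:
    """Stand-in for an LLM judge; here a toy heuristic that prefers longer drafts."""
    return len(candidate) > len(reference)

def pairwise_reward(candidate: str, reference: str) -> float:
    """Map the binary comparison outcome to a discriminative +/-1 reward,
    avoiding the sparsity of an absolute scalar score."""
    return 1.0 if judge_prefers(candidate, reference) else -1.0

# Hypothetical references of increasing quality, used as the curriculum.
references = [
    "weak draft",
    "a noticeably stronger reference draft",
    "an even stronger, highly polished reference draft",
]

def current_reference(win_rate: float) -> str:
    """Dynamic reference scheduling: once the policy beats the current
    reference often enough, compare against a harder one."""
    idx = min(int(win_rate * len(references)), len(references) - 1)
    return references[idx]

if __name__ == "__main__":
    ref = current_reference(win_rate=0.2)           # early training: easy reference
    print(pairwise_reward("a fairly long candidate draft", ref))  # 1.0
```

The key design point the sketch mirrors is that both mechanisms are relative: the reward comes from beating a reference rather than from an absolute score, and difficulty rises by swapping in stronger references as the measured win rate improves.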