Writing-RL: Advancing Long-form Writing via Adaptive Curriculum Reinforcement Learning

📅 2025-06-06
📈 Citations: 0 · Influential citations: 0
🤖 AI Summary
To address the limitations of supervised fine-tuning (SFT) in long-form text generation, namely data saturation and teacher-signal bias, this paper proposes an adaptive curriculum reinforcement learning framework. Methodologically, it introduces three key components: (1) margin-aware data selection, which dynamically identifies high-potential training samples; (2) a pairwise comparison reward mechanism, which mitigates the sparsity of scalar rewards; and (3) dynamic reference scheduling, which enables a difficulty-progressive curriculum. Evaluated with 7B-scale writer models on long-form writing, the approach significantly outperforms strong SFT baselines. Unexpectedly, it also generalizes to long-input reasoning tasks, an early empirical indication that training for long-output generation transfers positively to long-context understanding, suggesting a path toward unifying long-input and long-output capability modeling within a single framework.
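To make the pairwise comparison reward concrete, here is a minimal, hypothetical sketch (the judge interface, reward values, and function names are assumptions, not the authors' implementation): instead of asking a judge for an absolute score, each rollout is compared head-to-head against a reference, and the preference outcome becomes the reward.

```python
# Hypothetical sketch of a pairwise comparison reward (not the paper's code).
# Absolute scalar scores for long essays are sparse and poorly calibrated;
# comparing the policy's output against a reference yields a more
# discriminative signal.

def pairwise_reward(prompt: str, policy_output: str, reference: str, judge) -> float:
    """Score one rollout by head-to-head comparison with a reference.

    `judge(prompt, a, b)` is an assumed LLM-judge callable that returns
    'A' if it prefers `a`, 'B' if it prefers `b`, or 'tie'.
    """
    verdict = judge(prompt, policy_output, reference)
    if verdict == "A":        # policy output preferred over the reference
        return 1.0
    if verdict == "tie":
        return 0.5
    return 0.0                # reference preferred


# Usage with a trivial stand-in judge (prefers the longer text):
toy_judge = lambda p, a, b: "A" if len(a) > len(b) else ("B" if len(b) > len(a) else "tie")
print(pairwise_reward("Write an essay.", "a long draft " * 40, "a short reference", toy_judge))
```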

📝 Abstract
Recent advances in Large Language Models (LLMs) have enabled strong performance in long-form writing, yet existing supervised fine-tuning (SFT) approaches suffer from limitations such as data saturation and restricted learning capacity bounded by teacher signals. In this work, we present Writing-RL: an Adaptive Curriculum Reinforcement Learning framework to advance long-form writing capabilities beyond SFT. The framework consists of three key components: a Margin-aware Data Selection strategy that prioritizes samples with high learning potential, a Pairwise Comparison Reward mechanism that provides discriminative learning signals in the absence of verifiable rewards, and a Dynamic Reference Scheduling approach, which plays a particularly critical role by adaptively adjusting task difficulty based on evolving model performance. Experiments on 7B-scale writer models show that our RL framework substantially improves long-form writing performance over strong SFT baselines. Furthermore, we observe that models trained with long-output RL generalize surprisingly well to long-input reasoning tasks, potentially offering a promising perspective for rethinking long-context training.
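The abstract singles out Dynamic Reference Scheduling as the most critical component. Below is a minimal sketch of how such a scheduler could work, under the assumption that each prompt carries several references ordered from weakest to strongest and that a rolling win rate against the current reference is tracked; the `Sample` structure, thresholds, and names are illustrative, not the paper's.

```python
# Illustrative sketch of dynamic reference scheduling (assumptions, not the
# paper's implementation). Each prompt carries reference essays ordered from
# weakest to strongest; once the policy reliably beats the current reference,
# a stronger one is swapped in, so difficulty tracks model capability.

from dataclasses import dataclass

@dataclass
class Sample:
    prompt: str
    references: list       # reference essays, weakest -> strongest
    level: int = 0         # index of the reference currently in use
    wins: int = 0
    rounds: int = 0

def update_schedule(sample: Sample, beat_reference: bool,
                    promote_at: float = 0.7, min_rounds: int = 8) -> str:
    """Record one pairwise outcome and return the reference to train against next."""
    sample.rounds += 1
    sample.wins += int(beat_reference)
    if (sample.rounds >= min_rounds
            and sample.wins / sample.rounds >= promote_at
            and sample.level < len(sample.references) - 1):
        sample.level += 1                   # promote to a harder reference
        sample.wins = sample.rounds = 0     # reset stats at the new level
    return sample.references[sample.level]
```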
Problem

Research questions and friction points this paper is trying to address.

Overcoming data saturation in supervised fine-tuning for long-form writing
Providing discriminative learning signals without verifiable rewards
Adaptively adjusting task difficulty based on model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Margin-aware Data Selection prioritizes high-potential samples (see the sketch after this list)
Pairwise Comparison Reward provides discriminative signals
Dynamic Reference Scheduling adjusts task difficulty adaptively
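As a hedged sketch of what margin-aware selection could look like in code (the win-rate band and helper names are assumptions, not the paper's implementation): prompts the policy nearly always wins or nearly always loses against their references carry little gradient signal under a pairwise reward, so training concentrates on the middle band.

```python
# Illustrative sketch of margin-aware data selection (not the paper's code).
# Samples whose recent win rate against their reference sits in a middle
# band are treated as having the highest learning potential.

def margin_aware_select(samples, win_rate, band=(0.2, 0.8)):
    """Keep samples whose rolling win rate falls inside the learning band.

    `win_rate(sample)` is an assumed callable returning the policy's recent
    win rate against that sample's current reference.
    """
    lo, hi = band
    return [s for s in samples if lo <= win_rate(s) <= hi]

def rank_by_potential(samples, win_rate):
    """Order retained samples so the most uncertain prompts (win rate
    closest to 0.5) come first."""
    return sorted(samples, key=lambda s: abs(win_rate(s) - 0.5))
```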
👥 Authors
Xuanyu Lei (Tsinghua University)
Chenliang Li (Tongyi Lab, Alibaba Group)
Yuning Wu (Wayne State University)
Kaiming Liu (Tsinghua University)
Weizhou Shen (Tongyi Lab, Alibaba Group)
Peng Li (Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China)
Ming Yan (Tongyi Lab, Alibaba Group)
Ji Zhang (Tongyi Lab, Alibaba Group)
Fei Huang (Tongyi Lab, Alibaba Group)
Yang Liu (Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China; Dept. of Comp. Sci. & Tech., Institute for AI, Tsinghua University, Beijing, China)