AI Summary
To address the limited performance of diffusion-based large language models (Diffusion LLMs) on complex mathematical and code reasoning, as well as their constrained sampling flexibility, this paper proposes TraceRL, the first trajectory-aware reinforcement learning framework for Diffusion LLMs. Methodologically, it introduces three key innovations: (1) modeling the full reasoning trace as a training signal for the diffusion process; (2) incorporating a diffusion-based value model to enhance policy optimization stability; and (3) integrating curriculum learning and KV-caching to support chain-of-thought reasoning and block-length scaling. TraceRL is architecture-agnostic and adapts seamlessly to Diffusion LLMs of varying scales. Experiments demonstrate that TraDo-4B-Instruct outperforms 7B autoregressive baselines; TraDo-8B-Instruct achieves relative accuracy gains of 6.1% and 51.3% over Qwen2.5-7B-Instruct and Llama3.1-8B-Instruct on mathematical reasoning, respectively; and its long-chain variant yields an 18.1% relative improvement on MATH500.
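The trajectory-aware objective described in innovations (1) and (2) can be illustrated with a minimal sketch: a REINFORCE-style loss over the steps of a denoising trajectory, with a value model as a variance-reducing baseline. All names (`trace_rl_loss`, `value_fn`, the trace format) are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of a trajectory-aware policy-gradient loss for a
# diffusion LM. Each trajectory step contributes a log-probability for the
# tokens revealed at that step; the diffusion-based value model supplies a
# per-state baseline. Names and interfaces are assumptions for illustration.

def trace_rl_loss(trace, reward, value_fn):
    """trace: list of (step_logprob, state) pairs along the denoising
    trajectory; reward: scalar outcome reward for the final answer;
    value_fn: callable mapping an intermediate state to a baseline value."""
    loss = 0.0
    for step_logprob, state in trace:
        advantage = reward - value_fn(state)   # baseline reduces variance
        loss += -step_logprob * advantage      # REINFORCE-style term
    return loss / len(trace)                   # average over trajectory steps
```

The key difference from standard autoregressive RL post-training is that the gradient signal is distributed over the denoising steps of the sampled trajectory rather than over a left-to-right token sequence.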
Abstract
We propose TraceRL, a trajectory-aware reinforcement learning framework for diffusion language models (DLMs) that incorporates preferred inference trajectories into post-training and is applicable across different architectures. Equipped with a diffusion-based value model that enhances training stability, we demonstrate improved reasoning performance on complex math and coding tasks. In addition, TraceRL can be applied to adapt block-specific models to larger blocks, which improves sampling flexibility. Employing TraceRL, we derive a series of state-of-the-art diffusion language models, namely TraDo. Although smaller than 7B-scale AR models, TraDo-4B-Instruct still consistently outperforms them across complex math reasoning tasks. TraDo-8B-Instruct achieves relative accuracy improvements of 6.1% over Qwen2.5-7B-Instruct and 51.3% over Llama3.1-8B-Instruct on mathematical reasoning benchmarks. Through curriculum learning, we also derive the first long-CoT DLM, outperforming Qwen2.5-7B-Instruct on MATH500 with an 18.1% relative accuracy gain. To facilitate reproducible research and practical applications, we release a comprehensive open-source framework for building, training, and deploying diffusion LLMs across diverse architectures. The framework integrates accelerated KV-cache techniques and inference engines for both inference and reinforcement learning, and includes implementations of various supervised fine-tuning and RL methods for mathematics, coding, and general tasks. Code and Models: https://github.com/Gen-Verse/dLLM-RL