Anchored Diffusion Language Model

📅 2025-05-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Diffusion language models (DLMs) suffer from inaccurate reconstruction and, compared to autoregressive (AR) models, inferior likelihood estimation and text quality, primarily because key tokens are prematurely masked during the forward diffusion process. To address this, we propose a two-stage anchored framework: an anchor network first identifies the distribution of salient tokens, then guides the conditional diffusion process to prioritize precise reconstruction at these critical positions. This work introduces the "anchoring" mechanism—novel for DLMs—to enhance sample efficiency and likelihood quality, and derives the Anchored Negative Evidence Lower Bound (ANELBO) as the principled optimization objective. Experiments demonstrate: up to 25.4% perplexity reduction on LM1B and OpenWebText; state-of-the-art performance on seven zero-shot benchmarks; a MAUVE score surpassing AR models for the first time, marking a breakthrough in human-preference-aligned text generation; and transferable gains—anchoring also improves AR model performance and enhances mathematical and logical reasoning capabilities.

📝 Abstract
Diffusion Language Models (DLMs) promise parallel generation and bidirectional context, yet they underperform autoregressive (AR) models in both likelihood modeling and generated text quality. We identify that this performance gap arises when important tokens (e.g., key words or low-frequency words that anchor a sentence) are masked early in the forward process, limiting contextual information for accurate reconstruction. To address this, we introduce the Anchored Diffusion Language Model (ADLM), a novel two-stage framework that first predicts distributions over important tokens via an anchor network, and then predicts the likelihoods of missing tokens conditioned on the anchored predictions. ADLM significantly improves test perplexity on LM1B and OpenWebText, achieving up to 25.4% gains over prior DLMs, and narrows the gap with strong AR baselines. It also achieves state-of-the-art performance in zero-shot generalization across seven benchmarks and surpasses AR models in MAUVE score, which marks the first time a DLM generates better human-like text than an AR model. Theoretically, we derive an Anchored Negative Evidence Lower Bound (ANELBO) objective and show that anchoring improves sample complexity and likelihood modeling. Beyond diffusion, anchoring boosts performance in AR models and enhances reasoning in math and logic tasks, outperforming existing chain-of-thought approaches.
Problem

Research questions and friction points this paper is trying to address.

DLMs underperform AR models in likelihood modeling and generated text quality
Important tokens masked early in the forward process limit the context available for accurate reconstruction
ADLM improves DLM performance and narrows the gap with AR models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage framework with anchor network
Predicts important tokens first
Improves likelihood and text quality
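The two-stage idea above can be sketched with toy stand-ins. Everything here is hypothetical illustration, not the paper's implementation: `anchor_network` and `denoiser` would be learned transformer models in ADLM, whereas below they are hard-coded rules that merely show the control flow (predict anchors first, then fill the remaining masks conditioned on them):

```python
MASK = "<mask>"

def anchor_network(tokens):
    # Toy stand-in for the anchor network: propose a token for each masked
    # position. (In ADLM this is a learned model that predicts a distribution
    # over important tokens; the even/odd rule here is a dummy heuristic.)
    anchors = {}
    for i, t in enumerate(tokens):
        if t == MASK:
            anchors[i] = "cat" if i % 2 == 0 else "sat"
    return anchors

def denoiser(tokens, anchors):
    # Toy stand-in for the conditional denoiser: commit the anchor
    # predictions first, then reconstruct any remaining masked tokens
    # conditioned on them.
    out = list(tokens)
    for i, t in anchors.items():
        out[i] = t                      # stage 1: fix anchors
    for i, t in enumerate(out):
        if t == MASK:
            out[i] = "the"              # stage 2: fill the rest
    return out

def anchored_step(tokens):
    """One reverse step: anchor predictions first, then conditional fill-in."""
    return denoiser(tokens, anchor_network(tokens))

noisy = ["the", MASK, MASK, "on", MASK]
print(anchored_step(noisy))             # prints ['the', 'sat', 'cat', 'on', 'cat']
```

The point of the structure is that the second stage never reconstructs a masked position without first conditioning on the anchor proposals, which is the mechanism the abstract credits for the improved reconstruction of important tokens.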
Litu Rout
The University of Texas at Austin
Machine Learning · Generative Modeling · Sampling · Optimization
C. Caramanis
The University of Texas at Austin
Sanjay Shakkottai
The University of Texas at Austin