Latent Refinement Decoding: Enhancing Diffusion-Based Language Models by Refining Belief States

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Autoregressive language models suffer from high generation latency, while existing diffusion-based approaches—though enabling parallel decoding—face two critical bottlenecks: information loss and premature decision-making. This paper proposes Latent Refinement Decoding (LRD), a two-stage iterative optimization framework. First, it models global semantics via a mixture-of-distributions representation in latent space; second, it introduces a predictive feedback loop coupled with a dynamic KL-divergence-based convergence criterion to progressively refine belief states and enable early stopping. LRD overcomes fundamental limitations of both autoregressive and single-step diffusion paradigms, achieving a favorable trade-off between generation quality and efficiency. Empirically, LRD yields substantial improvements on code generation (HumanEval +6.3) and mathematical reasoning (MATH500 +3.8), with up to 10.6× speedup over baseline autoregressive inference.

📝 Abstract
Autoregressive (AR) models remain the standard for natural language generation but still suffer from high latency due to strictly sequential decoding. Recent diffusion-inspired approaches, such as LlaDA and Dream, mitigate this by generating in parallel, yet they suffer from two core limitations: information loss, as predictive distributions for non-finalized tokens are discarded at each step, and premature commitment, where local decisions are made without sufficient global coordination. We introduce Latent Refinement Decoding (LRD), a two-stage framework with Latent Refinement and a Predictive Feedback Loop. The first stage maintains masked positions as distributional mixtures of predicted tokens and the mask embedding, allowing the model to establish more globally consistent beliefs. The second stage progressively finalizes confident tokens while retaining uncertain ones for iterative feedback. KL-divergence dynamics provide a principled and reliable criterion for convergence and early stopping. Experiments across coding (HumanEval +6.3, MBPP +2.6) and reasoning (GSM8K +2.9, MATH500 +3.8) show that LRD improves accuracy while delivering speedups of up to 10.6x, making it a strong and versatile alternative for parallel sequence generation.
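The first-stage idea described in the abstract — representing each non-finalized position as a mixture of its predicted token distribution and the mask embedding, so belief is carried across steps instead of being discarded — can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the function names and the fixed mixing weight `alpha` are hypothetical, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mixture_input_embeddings(logits, token_emb, mask_emb, alpha=0.5):
    """Soft input for non-finalized (masked) positions.

    Each masked position becomes a convex mixture of
    (a) the probability-weighted average of token embeddings and
    (b) the [MASK] embedding, preserving the predictive distribution
    rather than discarding it at every step.

    logits:    (num_positions, vocab_size) predictions at masked positions
    token_emb: (vocab_size, d_model) embedding matrix
    mask_emb:  (d_model,) embedding of the mask token
    alpha:     mixing weight (hypothetical; the paper's schedule may differ)
    """
    probs = softmax(logits)            # belief over the vocabulary
    expected = probs @ token_emb       # (num_positions, d_model)
    return alpha * expected + (1 - alpha) * mask_emb
```

With `alpha = 0` the position reduces to the plain mask embedding; with `alpha = 1` it is the fully belief-weighted embedding, so the weight interpolates between the two regimes the abstract contrasts.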
Problem

Research questions and friction points this paper is trying to address.

Reducing latency in autoregressive language models
Addressing information loss in diffusion-based generation
Mitigating premature token commitment with global coordination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage framework with latent refinement and feedback loop
Maintains distributional mixtures for globally consistent beliefs
Uses KL-divergence dynamics for convergence and early stopping
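The last point above — using KL-divergence dynamics as a convergence signal — admits a simple sketch: track the divergence between successive rounds' predictive distributions and stop refining once it falls below a threshold. A hedged NumPy illustration follows; the fixed threshold `tol` is an assumption, standing in for the dynamic criterion the paper describes.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) along the last axis, with clipping for numerical safety."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q), axis=-1)

def has_converged(prev_probs, curr_probs, tol=1e-3):
    """Early-stopping check between consecutive refinement rounds.

    prev_probs, curr_probs: (num_positions, vocab_size) predictive
    distributions for the still-uncertain positions. Refinement stops
    when the mean KL between successive belief states drops below `tol`
    (a hypothetical fixed threshold; the paper's criterion is dynamic).
    """
    return float(kl_divergence(prev_probs, curr_probs).mean()) < tol
```

In a decoding loop, confident tokens would be finalized each round while `has_converged` decides when the remaining belief states have stabilized enough to stop iterating.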
Qinglin Zhu
King’s College London, UK
Yizhen Yao
King’s College London, UK
Runcong Zhao
Senior Research Scientist, King's College London
Natural Language Processing
Yanzheng Xiang
King’s College London, UK
Amrutha Saseendran
Centre for AI, Data Science & Artificial Intelligence, BioPharmaceuticals R&D, AstraZeneca, UK
Chen Jin
Centre for AI, Data Science & Artificial Intelligence, BioPharmaceuticals R&D, AstraZeneca, UK
Philip Alexander Teare
Centre for AI, Data Science & Artificial Intelligence, BioPharmaceuticals R&D, AstraZeneca, UK
Bin Liang
MoE Lab, The Chinese University of Hong Kong
Yulan He
Professor, King's College London; Turing AI Fellow
Natural Language Processing, Large Language Models, AI for education and health
Lin Gui
Assistant Professor, King's College London
Natural Language Processing, Computational Linguistics