DualDiffusion: A Speculative Decoding Strategy for Masked Diffusion Models

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Masked diffusion language models suffer from inefficient inference due to their bidirectional attention mechanism, which precludes key-value caching, and existing acceleration methods often compromise generation quality. This work introduces speculative decoding to this class of models for the first time, proposing a draft-verification dual-model architecture: a lightweight draft model generates multi-step token candidates in parallel, while a high-fidelity verification model validates the output in a single step. The approach substantially reduces inference steps without degrading generation quality, achieving a superior Pareto frontier between quality and efficiency on the MMLU and GSM8K benchmarks.
📝 Abstract
Masked Diffusion Models (MDMs) offer a promising alternative to autoregressive language models by enabling parallel token generation and bidirectional context modeling. However, their inference speed is significantly limited by the inability to cache key-value pairs due to bidirectional attention, requiring $O(N^2)$ computations at each generation step. While recent methods like FastDLLM and DkvCache improve inference speed through attention approximations and caching strategies, they achieve speedups at the cost of generation quality. We propose DualDiffusion, a speculative decoding framework for MDMs that combines fast drafter models (using efficient approximations) with slower, more accurate verifier models. By running multiple steps of a lightweight drafter followed by a single verification step, DualDiffusion achieves a superior Pareto frontier between generation steps and accuracy compared to existing approaches. We evaluate our method on MMLU and GSM8K, demonstrating that DualDiffusion maintains high accuracy while reducing the number of generation steps required, effectively pushing the quality-efficiency trade-off curve for masked diffusion language models.
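The draft-then-verify loop the abstract describes — several cheap drafter steps followed by a single verification step, repeated until no masked positions remain — can be sketched in a few lines. Everything below is a toy stand-in, not the paper's actual models: the drafter fills masks with random guesses, and the verifier accepts each draft with a fixed probability rather than scoring it with a full MDM forward pass.

```python
import random

MASK = 0  # hypothetical mask token id

def drafter_step(tokens, k=2):
    """Cheap drafter pass: unmask up to k masked positions with quick guesses.
    (Toy stand-in; a real drafter would be a lightweight MDM predicting tokens.)"""
    out = list(tokens)
    filled = 0
    for i, t in enumerate(out):
        if t == MASK and filled < k:
            out[i] = random.randint(1, 9)  # hypothetical vocabulary 1..9
            filled += 1
    return out

def verifier_step(committed, draft, accept_prob=1.0):
    """One full-model pass: accept each drafted token or re-mask it.
    (Toy stand-in; a real verifier would validate drafts with the accurate MDM.)"""
    out = []
    for c, d in zip(committed, draft):
        if c != MASK:
            out.append(c)      # position was committed in an earlier round
        elif random.random() < accept_prob:
            out.append(d)      # verifier accepts the drafted token
        else:
            out.append(MASK)   # rejected: stays masked for redrafting
    return out

def dual_diffusion_decode(tokens, draft_steps=4, max_rounds=16, accept_prob=1.0):
    """Run several drafter steps in a row, then a single verification step,
    repeating until no masked positions remain (or max_rounds is hit)."""
    for _ in range(max_rounds):
        if MASK not in tokens:
            break
        draft = tokens
        for _ in range(draft_steps):
            draft = drafter_step(draft)
        tokens = verifier_step(tokens, draft, accept_prob)
    return tokens
```

The efficiency claim rests on the ratio of model costs: each round spends `draft_steps` cheap passes plus one expensive pass, so when most drafts are accepted, far fewer verifier (full-quality) steps are needed than a plain MDM decoding loop would use.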
Problem

Research questions and friction points this paper is trying to address.

Masked Diffusion Models
inference speed
bidirectional attention
generation quality
key-value caching
Innovation

Methods, ideas, or system contributions that make the work stand out.

Speculative Decoding
Masked Diffusion Models
Drafter-Verifier Framework
Inference Acceleration
Bidirectional Context Modeling
Satyam Goyal
University of Michigan, Ann Arbor
Generative AI, Artificial Intelligence, Deep Learning
Kushal Patel
Department of Computer Science and Engineering, University of Michigan, Ann Arbor, MI, USA
Tanush Mittal
Department of Computer Science and Engineering, University of Michigan, Ann Arbor, MI, USA
Arjun Laxman
Department of Computer Science and Engineering, University of Michigan, Ann Arbor, MI, USA