🤖 AI Summary
Masked diffusion language models suffer from slow inference because their bidirectional attention precludes key-value caching, and existing acceleration methods often trade away generation quality. This work introduces speculative decoding to this class of models for the first time, proposing a dual-model draft-and-verify architecture: a lightweight draft model proposes token candidates over multiple cheap parallel steps, while a high-fidelity verification model validates them in a single forward pass. The approach substantially reduces the number of inference steps without degrading generation quality, achieving a superior Pareto frontier between quality and efficiency on the MMLU and GSM8K benchmarks.
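As a concrete illustration, here is a minimal sketch of such a draft-then-verify loop for a masked diffusion model. The `drafter`/`verifier` interfaces, `MASK_ID`, the confidence-ordered unmasking, and the exact-agreement acceptance test are all illustrative assumptions, not the paper's actual implementation.

```python
import torch

MASK_ID = 0  # hypothetical id of the [MASK] token

@torch.no_grad()
def dual_decode(drafter, verifier, tokens, k=4, max_rounds=64):
    """tokens: 1-D LongTensor where ungenerated positions hold MASK_ID.
    drafter/verifier map a partially masked sequence to (seq_len, vocab) logits."""
    for _ in range(max_rounds):
        masked = tokens == MASK_ID
        if not masked.any():
            break                                   # sequence fully generated
        # 1) Draft: k cheap denoising steps with the lightweight model,
        #    unmasking the most confident remaining position at each step.
        draft = tokens.clone()
        for _ in range(k):
            still = draft == MASK_ID
            if not still.any():
                break
            probs = drafter(draft).softmax(-1)      # (seq_len, vocab)
            conf, cand = probs.max(-1)              # per-position best token
            conf = conf.masked_fill(~still, -1.0)   # only consider masked slots
            pos = conf.argmax()
            draft[pos] = cand[pos]
        # 2) Verify: a single forward pass of the accurate model.
        v_pred = verifier(draft).argmax(-1)         # (seq_len,)
        filled = masked & (draft != MASK_ID)        # tokens the drafter added
        agree = filled & (v_pred == draft)
        tokens[agree] = draft[agree]                # commit where models agree;
                                                    # rejections stay masked
        if not agree.any():
            # Fallback so every round makes progress: commit the verifier's
            # own prediction at the first still-masked position.
            pos = masked.nonzero()[0, 0]
            tokens[pos] = v_pred[pos]
    return tokens
```

The acceptance test here is strict argmax agreement; a softer criterion (e.g., a verifier-probability threshold) would accept more drafted tokens per round at the risk of some quality drift.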
📝 Abstract
Masked Diffusion Models (MDMs) offer a promising alternative to autoregressive language models by enabling parallel token generation and bidirectional context modeling. However, their inference speed is severely limited by the inability to cache key-value pairs under bidirectional attention, which forces $O(N^2)$ computation at every generation step. While recent methods such as FastDLLM and DkvCache improve inference speed through attention approximations and caching strategies, their speedups come at the cost of generation quality. We propose DualDiffusion, a speculative decoding framework for MDMs that pairs fast drafter models (built on such efficient approximations) with slower, more accurate verifier models. By running multiple steps of a lightweight drafter followed by a single verification step, DualDiffusion achieves a superior Pareto frontier between generation steps and accuracy compared to existing approaches. We evaluate our method on MMLU and GSM8K, demonstrating that DualDiffusion maintains high accuracy while reducing the number of generation steps required, effectively advancing the quality-efficiency trade-off for masked diffusion language models.
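The abstract does not specify the verification rule. One natural candidate, sketched below purely as an assumption, is the per-position accept-or-resample test from standard speculative sampling: accept the drafted token $x$ with probability $\min(1, q(x)/p(x))$, otherwise resample from the renormalized residual $\max(0, q - p)$, which leaves the committed token distributed exactly as the verifier would sample it. `p_draft`, `p_verify`, and `accept_or_resample` are hypothetical names.

```python
import torch

def accept_or_resample(p_draft: torch.Tensor, p_verify: torch.Tensor, x: int):
    """One drafted position: p_draft/p_verify are the two models' categorical
    distributions over the vocabulary there, x is the drafter's sampled token
    id. Returns (token_id, accepted)."""
    # Accept the draft with probability min(1, q(x) / p(x)).
    if torch.rand(()) < (p_verify[x] / p_draft[x]).clamp(max=1.0):
        return x, True
    # Otherwise resample from the renormalized residual max(0, q - p), which
    # makes the overall committed token an exact sample from the verifier.
    residual = (p_verify - p_draft).clamp(min=0.0)
    residual = residual / residual.sum()
    return int(torch.multinomial(residual, 1)), False
```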