Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding

📅 2025-05-28
🤖 AI Summary
Diffusion LLMs hold promise for parallel decoding but suffer from slow inference and degraded quality due to the absence of KV caching and dependency violations caused by synchronous multi-token generation. This work proposes a training-free acceleration framework: (1) a novel block-wise approximate KV cache tailored for bidirectional diffusion models, enabling inter-block cache reuse; and (2) a confidence-aware dynamic parallel decoding strategy that mitigates context mismatch arising from the conditional independence assumption. The method requires no fine-tuning and significantly improves throughput and decoding stability. Evaluated on LLaDA and Dream models, it achieves up to 27.6× higher throughput over baseline diffusion LLMs, with negligible accuracy degradation and performance approaching that of autoregressive models.

📝 Abstract
Diffusion-based large language models (Diffusion LLMs) have shown promise for non-autoregressive text generation with parallel decoding capabilities. However, the practical inference speed of open-sourced Diffusion LLMs often lags behind autoregressive models due to the lack of Key-Value (KV) Cache and quality degradation when decoding multiple tokens simultaneously. To bridge this gap, we introduce a novel block-wise approximate KV Cache mechanism tailored for bidirectional diffusion models, enabling cache reuse with negligible performance drop. Additionally, we identify the root cause of generation quality degradation in parallel decoding as the disruption of token dependencies under the conditional independence assumption. To address this, we propose a confidence-aware parallel decoding strategy that selectively decodes tokens exceeding a confidence threshold, mitigating dependency violations and maintaining generation quality. Experimental results on LLaDA and Dream models across multiple LLM benchmarks demonstrate up to 27.6× throughput improvement with minimal accuracy loss, closing the performance gap with autoregressive models and paving the way for practical deployment of Diffusion LLMs.
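The confidence-aware parallel decoding idea can be illustrated as a single denoising step: greedily pick the top token at each masked position, commit only those positions whose top-1 probability clears a threshold, and fall back to the single most confident position so decoding always makes progress. This is a minimal sketch of the general technique, not the authors' implementation; the function name, the fallback rule, and the 0.9 threshold are assumptions for illustration.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def confidence_parallel_decode_step(logits, threshold=0.9):
    """One denoising step over the masked positions of a block.

    Commits every position whose top-1 probability exceeds `threshold`;
    if none qualifies, commits only the single most confident position
    so the decoder never stalls.
    """
    probs = softmax(logits)             # shape: (num_masked, vocab_size)
    top_tokens = probs.argmax(axis=-1)  # greedy token per position
    confidence = probs.max(axis=-1)     # top-1 probability per position
    accept = confidence > threshold
    if not accept.any():
        accept[confidence.argmax()] = True  # fallback: best position only
    return top_tokens, accept

logits = np.array([[10.0, 0.0, 0.0],   # confident position
                   [1.0, 0.9, 0.8]])   # ambiguous position
tokens, accept = confidence_parallel_decode_step(logits, threshold=0.9)
# accept -> [True, False]: only the confident position is committed
```

Positions left unaccepted stay masked and are revisited in later denoising steps, which is how the scheme avoids committing mutually inconsistent tokens under the conditional independence assumption.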
Problem

Research questions and friction points this paper is trying to address.

Lack of KV Cache in Diffusion LLMs slows inference speed
Quality degradation in parallel decoding due to token dependency disruption
Persistent performance gap between diffusion and autoregressive LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Block-wise KV Cache for bidirectional diffusion models
Confidence-aware parallel decoding strategy
Up to 27.6× throughput improvement with minimal accuracy loss
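The block-wise KV cache can be sketched as a small bookkeeping structure: keys and values for the prompt and all finished blocks are computed once and reused for every denoising step of the current block, then extended when the block is committed. This is a hypothetical interface written for illustration (the class and method names are assumptions), not the paper's actual implementation.

```python
import numpy as np

class BlockKVCache:
    """Minimal sketch of block-wise approximate KV caching.

    Within a block, every denoising step reuses `cached()` for the
    prefix instead of recomputing attention keys/values; the cache is
    only extended once per block, via `commit_block()`.
    """
    def __init__(self):
        self.keys, self.values = [], []

    def cached(self):
        # KV of all committed blocks, concatenated along the sequence axis.
        if not self.keys:
            return None, None
        return np.concatenate(self.keys, axis=0), np.concatenate(self.values, axis=0)

    def commit_block(self, k_block, v_block):
        # Called once, after every token in the block has been decoded.
        self.keys.append(k_block)
        self.values.append(v_block)

cache = BlockKVCache()
cache.commit_block(np.zeros((4, 8)), np.zeros((4, 8)))  # first 4-token block
k, v = cache.cached()  # reused by all denoising steps of the next block
```

The cache is "approximate" because bidirectional attention means a finished block's KV could in principle change as later blocks are decoded; the method accepts that small mismatch in exchange for skipping the recomputation.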