Accelerating Diffusion LLMs via Adaptive Parallel Decoding

📅 2025-05-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion-based large language models (dLLMs) theoretically permit parallel token generation, but in practice struggle to match autoregressive decoding speed without sacrificing quality. Method: We propose Adaptive Parallel Decoding (APD), a framework that dynamically adjusts the number of tokens sampled in parallel by forming a multiplicative mixture between the dLLM's marginal probabilities and the joint sequence probability under a small auxiliary autoregressive model. This inverts the standard speculative-decoding setup, in which a small drafter is verified by a large autoregressive model. APD exposes three tunable parameters for flexibly trading off throughput and quality, and is further optimized with KV caching and masked-input truncation. Results: On downstream benchmarks, APD achieves markedly higher throughput than existing dLLM decoding methods with minimal quality degradation.

📝 Abstract
The generation speed of LLMs is bottlenecked by autoregressive decoding, where tokens are predicted sequentially one by one. Alternatively, diffusion large language models (dLLMs) theoretically allow for parallel token generation, but in practice struggle to achieve the speed of autoregressive models without significantly sacrificing quality. We therefore introduce adaptive parallel decoding (APD), a novel method that dynamically adjusts the number of tokens sampled in parallel. We achieve this by defining a multiplicative mixture between the dLLM marginal probabilities and the joint probability of sequences under a small auxiliary autoregressive model. This inverts the standard setup of speculative decoding, where the goal is to sample from a large autoregressive verifier by drafting from a smaller model. We further optimize APD by enabling KV caching and limiting the size of the masked input. Altogether, our method puts forward three tunable parameters to flexibly trade off throughput and quality. We show that APD provides markedly higher throughput with minimal quality degradation on downstream benchmarks.
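The core idea, a multiplicative mixture deciding how many drafted tokens to accept in parallel, can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the geometric mixture weight `weight` and acceptance `threshold` are hypothetical stand-ins for two of the tunable parameters, and `dllm_marginal` / `ar_conditional` are assumed probability oracles for the two models.

```python
def accepted_prefix_length(draft, dllm_marginal, ar_conditional,
                           weight=0.5, threshold=0.1):
    """Accept the longest prefix of a parallel-sampled `draft` whose per-token
    score under a multiplicative (geometric) mixture of the dLLM marginal and
    a small auxiliary autoregressive model stays above `threshold`.

    Sketch only: the exact parameterization of the mixture in APD is not
    reproduced here; `weight` and `threshold` are illustrative knobs for the
    throughput/quality trade-off.
    """
    prefix = []
    for pos, tok in enumerate(draft):
        p_d = dllm_marginal(pos, tok)      # dLLM marginal prob of this token
        p_a = ar_conditional(prefix, tok)  # AR prob given accepted prefix
        score = (p_d ** (1.0 - weight)) * (p_a ** weight)
        if score < threshold:
            break  # disagreement: stop parallel acceptance at this position
        prefix.append(tok)
    return len(prefix)
```

A higher `threshold` (or a larger AR `weight`) accepts fewer tokens per step, trading throughput for closer agreement with the autoregressive verifier; note this inverts speculative decoding, since here the large dLLM drafts and the small AR model helps gate acceptance.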
Problem

Research questions and friction points this paper is trying to address.

Improving speed of diffusion LLMs via parallel decoding
Balancing parallel token generation with quality maintenance
Optimizing throughput-quality tradeoff with adaptive parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive parallel decoding for dynamic token sampling
Multiplicative mixture of dLLM and autoregressive probabilities
Optimized KV caching and masked input size
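The "limiting the size of the masked input" optimization can be pictured as feeding the dLLM only a bounded window of masked positions past the generated prefix, rather than the full masked canvas. The sketch below is an assumption about how such truncation could look; `MASK` and `window` are hypothetical names, with `window` standing in for the remaining tunable parameter.

```python
MASK = -1  # hypothetical mask token id

def build_truncated_input(generated, total_len, window=32):
    """Return the dLLM input: the generated prefix plus at most `window`
    masked positions, instead of masking out the entire remaining sequence.
    Shorter inputs mean less attention compute per denoising step (a sketch
    of the paper's masked-input size limit, not its exact implementation)."""
    remaining = total_len - len(generated)
    return generated + [MASK] * min(window, max(remaining, 0))
```

Combined with KV caching over the already-generated prefix, this keeps each decoding step's cost roughly proportional to the lookahead window rather than the full target length.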