FS-DFM: Fast and Accurate Long Text Generation with Few-Step Diffusion Language Models

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Autoregressive language models (ARLMs) suffer high latency in long-text generation because they decode one token per forward pass, while discrete diffusion language models (DLMs), though parallelizable across positions, require hundreds to thousands of sampling steps to reach high quality. To address this, we propose Few-Step Discrete Flow-Matching (FS-DFM), the first framework that explicitly treats the number of sampling steps as a controllable parameter. FS-DFM models token transitions with discrete flow matching, optimizes probability-flow updates, and distills teacher trajectories to keep generation consistent, stable, and high quality across arbitrary step counts. Experiments show that with only 8 sampling steps, FS-DFM matches the perplexity of a 1,024-step diffusion baseline of identical parameter count, delivering a 128× sampling speedup and corresponding gains in throughput and end-to-end latency.
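The summary above describes treating the step count as an explicit knob: each sampling step mixes the current one-hot token state toward the model's predicted target distribution, with a step size that grows as fewer steps are used. A minimal sketch of such a few-step sampler is below; the function names, the `model(x, t)` interface, and the exact mixing rule are illustrative assumptions, not the paper's actual implementation.

```python
import random

def fs_dfm_sample(model, x0, num_steps, vocab_size, rng):
    """Hypothetical few-step sampler sketch.  `num_steps` is an explicit
    parameter; each update mixes the current one-hot state toward the
    model's predicted per-position target distribution."""
    x = list(x0)
    for k in range(num_steps):
        t = k / num_steps          # current time in [0, 1)
        h = 1.0 / num_steps        # step size: fewer steps -> bigger moves
        alpha = h / (1.0 - t)      # mixing weight; reaches 1 on the final step
        probs = model(x, t)        # assumed interface: per-position target dists
        new_x = []
        for pos, token in enumerate(x):
            # Convex mix of the current one-hot state and the target,
            # so probability mass moves toward the target without overshoot.
            mixed = [alpha * probs[pos][v]
                     + (1.0 - alpha) * (1.0 if v == token else 0.0)
                     for v in range(vocab_size)]
            new_x.append(rng.choices(range(vocab_size), weights=mixed, k=1)[0])
        x = new_x
    return x
```

With 8 steps the final update has `alpha = 1`, so the last move samples directly from the model's predicted target distribution, which is the sense in which "one big move lands where many small moves would."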

📝 Abstract
Autoregressive language models (ARMs) deliver strong likelihoods, but are inherently serial: they generate one token per forward pass, which limits throughput and inflates latency for long sequences. Diffusion Language Models (DLMs) parallelize across positions and thus appear promising for language generation, yet standard discrete diffusion typically needs hundreds to thousands of model evaluations to reach high quality, trading serial depth for iterative breadth. We introduce FS-DFM (Few-Step Discrete Flow-Matching), a discrete flow-matching model designed for speed without sacrificing quality. The core idea is simple: make the number of sampling steps an explicit parameter and train the model to be consistent across step budgets, so one big move lands where many small moves would. We pair this with a reliable update rule that moves probability in the right direction without overshooting, and with strong teacher guidance distilled from long-run trajectories. Together, these choices make few-step sampling stable, accurate, and easy to control. On language modeling benchmarks, FS-DFM with 8 sampling steps achieves perplexity parity with a 1,024-step discrete-flow baseline for generating 1,024 tokens using a similar-size model, delivering up to 128 times faster sampling and corresponding latency/throughput gains.
Problem

Research questions and friction points this paper is trying to address.

Autoregressive models generate tokens serially, limiting throughput for long sequences
Standard diffusion models require hundreds of steps for high-quality generation
Few-step sampling needs to maintain quality while dramatically improving speed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Few-step discrete flow-matching for speed
Consistent training across different step budgets
Reliable update rule with distilled teacher guidance
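The consistency and distillation ideas listed above can be sketched as a training objective: one big student move should land where many small teacher moves do. The sketch below assumes a simple squared-error objective over distributions and illustrative `student_step`/`teacher_step` interfaces; the paper's actual distillation loss and parameterization may differ.

```python
def teacher_rollout(teacher_step, p, t, n_small):
    """Roll the teacher forward n_small small steps from (p, t) to t = 1."""
    h = (1.0 - t) / n_small
    q = p
    for i in range(n_small):
        q = teacher_step(q, t + i * h, h)
    return q

def consistency_loss(student_step, teacher_step, p, t, n_small):
    """Hypothetical consistency objective: penalize the gap between one
    big student move from t to 1 and many small teacher moves over the
    same interval (L2 over the distribution here for illustration)."""
    big = student_step(p, t, 1.0 - t)
    small = teacher_rollout(teacher_step, p, t, n_small)
    return sum((b - s) ** 2 for b, s in zip(big, small)) / len(big)
```

Minimizing this across randomly drawn step budgets is one way to make a model "consistent across step budgets": the student learns to compress a long teacher trajectory into a single accurate jump.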