🤖 AI Summary
Existing discrete diffusion models suffer from limited generation quality compared to their continuous counterparts, primarily due to a single absorbing state, rigid noise scheduling, and constrained sampling trajectories. To address these limitations, we propose the Rehashing Noise Framework, which introduces a stochastic multi-index corruption mechanism to expand the absorbing state space and an invertible rehashing sampler to overcome state collapse and trajectory monotonicity, two fundamental bottlenecks in discrete diffusion. Crucially, our method preserves full compatibility with standard Transformer architectures while substantially enhancing model expressivity and sampling diversity. Extensive experiments demonstrate that our approach reduces gFID from 6.18 to 1.61, achieving generation quality on par with state-of-the-art continuous diffusion models, alongside a significant inference speedup. This work establishes a new paradigm for efficient, high-fidelity discrete generative modeling.
📝 Abstract
Discrete diffusion models are gaining traction in visual generation for their efficiency and compatibility. However, pioneering attempts still fall behind their continuous counterparts, which we attribute to the noise (absorbing state) design and sampling heuristics. In this study, we propose the rehashing noise framework for the discrete diffusion transformer, termed ReDDiT, to extend absorbing states and improve the expressive capacity of discrete diffusion models. ReDDiT enriches the potential paths that latent variables can traverse during training with randomized multi-index corruption. The derived rehash sampler, which reverses the randomized absorbing paths, guarantees the diversity and low discrepancy of the generation process. These reformulations lead to more consistent and competitive generation quality, mitigating the need for heavily tuned randomness. Experiments show that ReDDiT significantly outperforms the baseline (reducing gFID from 6.18 to 1.61) and is on par with continuous counterparts at higher efficiency.
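To make the multi-index corruption idea concrete, here is a minimal, hedged sketch. It assumes a vocabulary of `vocab_size` ordinary tokens plus `num_masks` absorbing (mask) indices appended at positions `vocab_size .. vocab_size + num_masks - 1`; each token is independently replaced, with probability `mask_rate`, by one of those mask indices chosen uniformly at random, rather than by a single `[MASK]` token. The function name, signature, and index layout are illustrative assumptions, not the paper's actual implementation.

```python
import random


def multi_index_corrupt(tokens, mask_rate, vocab_size, num_masks, rng=None):
    """Corrupt a token sequence with multiple absorbing states.

    Each token is replaced with probability `mask_rate` by one of
    `num_masks` mask indices drawn uniformly from the range
    [vocab_size, vocab_size + num_masks). With num_masks == 1 this
    reduces to standard single-[MASK] absorbing diffusion.

    Illustrative sketch only; not ReDDiT's exact corruption schedule.
    """
    rng = rng or random.Random()
    corrupted = []
    for tok in tokens:
        if rng.random() < mask_rate:
            # Pick one of the multiple absorbing indices at random.
            corrupted.append(vocab_size + rng.randrange(num_masks))
        else:
            # Keep the original token.
            corrupted.append(tok)
    return corrupted
```

Because each position can land in any of several absorbing indices, the number of distinct corruption trajectories grows with `num_masks`, which is the path-enrichment effect the abstract describes.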