ReDDiT: Rehashing Noise for Discrete Visual Generation

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing discrete diffusion models suffer from limited generation quality compared to their continuous counterparts, primarily due to a single absorbing state, rigid noise scheduling, and constrained sampling trajectories. To address these limitations, we propose the Rehashing Noise Framework, which introduces a stochastic multi-index corruption mechanism to expand the absorbing state space, together with an invertible rehashing sampler that overcomes state collapse and trajectory monotonicity, two fundamental bottlenecks in discrete diffusion. Crucially, the method preserves full compatibility with standard Transformer architectures while substantially enhancing model expressivity and sampling diversity. Extensive experiments show that the approach reduces gFID from 6.18 to 1.61, matching state-of-the-art continuous diffusion models in generation quality while delivering a significant inference speedup. This work establishes a new paradigm for efficient, high-fidelity discrete generative modeling.

📝 Abstract
Discrete diffusion models are gaining traction in visual generation for their efficiency and compatibility. However, pioneering attempts still fall behind their continuous counterparts, which we attribute to the noise (absorbing state) design and sampling heuristics. In this study, we propose a rehashing noise framework for the discrete diffusion transformer, termed ReDDiT, to extend absorbing states and improve the expressive capacity of discrete diffusion models. ReDDiT enriches the potential paths that latent variables can traverse during training through randomized multi-index corruption. The derived rehash sampler, which reverses the randomized absorbing paths, guarantees the diversity and low discrepancy of the generation process. These reformulations lead to more consistent and competitive generation quality, mitigating the need for heavily tuned randomness. Experiments show that ReDDiT significantly outperforms the baseline (reducing gFID from 6.18 to 1.61) and is on par with continuous counterparts at higher efficiency.
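To make the corruption idea concrete, here is a minimal Python sketch of randomized multi-index corruption, assuming the absorbing states are extra vocabulary ids appended after the real vocabulary; the function name, corruption schedule, and uniform index choice are illustrative assumptions, not the paper's exact formulation.

```python
import random

def multi_index_corrupt(tokens, t, num_absorb, vocab_size, rng=random):
    """Corrupt a token sequence for discrete-diffusion training.

    Instead of a single [MASK] state, each corrupted position is sent to
    one of num_absorb absorbing ids (vocab_size .. vocab_size+num_absorb-1),
    chosen uniformly at random. t in [0, 1] is the corruption rate at the
    current timestep. Illustrative sketch; the paper's rule may differ.
    """
    corrupted, mask = [], []
    for tok in tokens:
        if rng.random() < t:                      # corrupt this position
            corrupted.append(vocab_size + rng.randrange(num_absorb))
            mask.append(True)
        else:                                     # keep the clean token
            corrupted.append(tok)
            mask.append(False)
    return corrupted, mask
```

Because each masked position may land in any of several absorbing ids, the same clean sequence yields many distinct corruption paths during training, which is what "enriches the potential paths that latent variables can traverse" refers to.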
Problem

Research questions and friction points this paper is trying to address.

Improving discrete diffusion models' expressive capacity
Enhancing generation diversity with rehashing noise
Bridging performance gap with continuous diffusion models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rehashing noise framework for discrete diffusion
Randomized multi-index corruption for training
Rehash sampler ensures diverse generation process
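The sampler's role, reversing the randomized absorbing paths, can be sketched as an iterative unmasking loop. In this sketch, `predict` is a stand-in for the denoising transformer's per-position prediction, and the linear reveal schedule is an assumption for illustration, not the paper's derived rehash sampler.

```python
import random

def rehash_sample(predict, length, steps, num_absorb, vocab_size, rng=random):
    """Sketch of a reverse sampler over randomized absorbing states.

    Starts from a sequence of random absorbing ids and reveals predicted
    tokens over `steps` iterations. predict(seq) stands in for the
    transformer's per-position token prediction; the linear reveal
    schedule is an assumption, not the paper's exact sampler.
    """
    seq = [vocab_size + rng.randrange(num_absorb) for _ in range(length)]
    absorbed = list(range(length))                 # still-masked positions
    for step in range(steps, 0, -1):
        preds = predict(seq)
        keep = round(length * (step - 1) / steps)  # positions left absorbed
        rng.shuffle(absorbed)
        while len(absorbed) > keep:                # reveal the rest this step
            i = absorbed.pop()
            seq[i] = preds[i]
    return seq
```

Randomizing both the initial absorbing ids and the reveal order is what gives the generation process the diversity and non-monotone trajectories the summary describes.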