🤖 AI Summary
This work addresses the challenge of posterior sampling in discrete-state spaces with discrete diffusion models. It proposes SG-DPS, a plug-and-play algorithm that integrates split Gibbs sampling into the diffusion posterior sampling (DPS) framework, the first such integration to come with theoretical convergence guarantees while remaining computationally efficient and practical. SG-DPS unifies reward-guided generation and inverse-problem solving without modifying the pre-trained diffusion model. On synthetic benchmarks it converges to the true posterior; across diverse discrete-data tasks, including text, graph-structured data, and discrete image modeling, it achieves state-of-the-art performance, improving sampling quality by up to 2× over existing baselines. The core contribution is the first DPS-style paradigm for discrete domains that combines rigorous theoretical foundations with broad applicability.
📝 Abstract
We study the problem of posterior sampling in discrete-state spaces using discrete diffusion models. While posterior sampling methods for continuous diffusion models have made remarkable progress, developing analogous methods for discrete diffusion models remains challenging. In this work, we introduce a principled plug-and-play discrete diffusion posterior sampling algorithm based on split Gibbs sampling, which we call SG-DPS. Our algorithm enables both reward-guided generation and inverse problem solving in discrete-state spaces. We demonstrate that SG-DPS converges to the true posterior distribution on synthetic benchmarks and achieves state-of-the-art posterior sampling performance on a range of discrete-data benchmarks, with up to 2× improvement over existing baselines.
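To make the split Gibbs idea concrete, here is a toy sketch, not the paper's SG-DPS algorithm: on a small discrete state space, an explicit prior pmf stands in for the pre-trained diffusion prior, and the Gaussian-shaped likelihood and the exponential coupling kernel are illustrative assumptions chosen for this example. The state is duplicated into `(x, z)`, with `x` carrying the likelihood and `z` carrying the prior, and the sampler alternates between the two exact conditionals, which is the plug-and-play structure split Gibbs methods exploit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical, for illustration): states {0,...,K-1};
# an explicit prior pmf stands in for the diffusion-model prior.
K = 8
prior = np.array([1.0, 2, 4, 8, 8, 4, 2, 1])
prior /= prior.sum()

# Likelihood of a noisy scalar observation y of the state:
# p(y|x) ∝ exp(-(y - x)^2 / (2 sigma^2))  (an assumed toy model).
y, sigma = 5.0, 1.5
states = np.arange(K)
lik = np.exp(-(y - states) ** 2 / (2 * sigma**2))

# Coupling kernel tying the two copies together; as rho -> 0 the
# split target's x-marginal approaches the true posterior.
rho = 0.5
coupling = np.exp(-np.abs(states[:, None] - states[None, :]) / rho)

def sample_from(p):
    """Draw one state from an unnormalized pmf."""
    p = p / p.sum()
    return rng.choice(K, p=p)

def split_gibbs(n_iter=20000, burn=2000):
    """Alternate exact conditional updates of x (likelihood side)
    and z (prior side) under the coupled joint distribution."""
    x, z = 0, 0
    samples = []
    for t in range(n_iter):
        x = sample_from(lik * coupling[:, z])    # likelihood step
        z = sample_from(prior * coupling[x, :])  # prior step
        if t >= burn:
            samples.append(x)
    return np.array(samples)

samples = split_gibbs()
empirical = np.bincount(samples, minlength=K) / len(samples)
print(np.round(empirical, 3))
```

Because both conditionals are exact conditionals of the coupled joint, the chain's stationary x-marginal is proportional to `lik(x) * sum_z prior(z) * coupling(x, z)`, which recovers the true posterior in the small-`rho` limit; in SG-DPS the prior-side conditional is instead sampled with the pre-trained discrete diffusion model.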