🤖 AI Summary
In discrete graph generation, existing diffusion and flow-matching models suffer from error accumulation during reverse denoising due to time-dependent noise processes—particularly under masked diffusion paradigms. To address this, we propose a time-independent iterative denoising framework grounded in the conditional independence assumption for iterative node/edge updates. We introduce, for the first time, a learnable Critic module that dynamically refines generation decisions based on data-distribution confidence scores. Our method integrates a graph-structure encoder, Critic-guided generation policy, and a flow-matching-inspired training objective. Evaluated on multiple benchmark tasks, our approach significantly outperforms state-of-the-art models—including MaskGIT and GraphDF—reducing FID by 32% while simultaneously improving generative novelty and structural fidelity.
📝 Abstract
Discrete Diffusion and Flow Matching models have significantly advanced generative modeling for discrete structures, including graphs. However, the time dependencies in the noising process of these models lead to error accumulation and propagation during the backward process. This issue, particularly pronounced in mask diffusion, is a known limitation in sequence modeling and, as we demonstrate, also impacts discrete diffusion models for graphs. To address this problem, we propose a novel framework called Iterative Denoising, which simplifies discrete diffusion and circumvents the issue by assuming conditional independence across time. Additionally, we enhance our model by incorporating a Critic, which during generation selectively retains or corrupts elements in an instance based on their likelihood under the data distribution. Our empirical evaluations demonstrate that the proposed method significantly outperforms existing discrete diffusion baselines in graph generation tasks.