🤖 AI Summary
This work addresses the degradation in generation quality caused by reverse-starting bias and exposure bias in graph diffusion models. To mitigate these issues without altering the network architecture, the authors propose a novel approach that integrates a Langevin sampling algorithm aligned with the forward maximum perturbation distribution and a correction mechanism based on a score difference. These components operate together to alleviate both sources of bias. The method substantially enhances generation consistency and fidelity, achieving state-of-the-art performance across multiple graph diffusion models, benchmark datasets, and downstream tasks.
📝 Abstract
Most existing graph diffusion models suffer from significant bias problems. We observe that the forward diffusion's maximum perturbation distribution in most models deviates from the standard Gaussian distribution, while reverse sampling consistently starts from a standard Gaussian distribution, producing a reverse-starting bias. Together with the inherent exposure bias of diffusion models, this bias degrades generation quality. This paper proposes a comprehensive approach to mitigate both biases. To mitigate the reverse-starting bias, we employ a newly designed Langevin sampling algorithm that aligns with the forward maximum perturbation distribution, establishing a new reverse-starting point. To address the exposure bias, we introduce a score correction mechanism based on a newly defined score difference. Our approach, which requires no network modifications, is validated across multiple models, datasets, and tasks, achieving state-of-the-art results. Code is at https://github.com/kunzhan/spp
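The core idea of aligning the reverse-starting point can be illustrated with a minimal Langevin-dynamics sketch. This is not the authors' algorithm (which operates on graph diffusion models); it only shows, under the assumption that the forward maximum perturbation distribution is a hypothetical non-standard Gaussian `N(mu, sigma^2 I)`, how unadjusted Langevin steps driven by the target's score can move samples from the conventional `N(0, I)` starting distribution toward that target:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the forward process's maximum perturbation
# distribution: N(mu, sigma^2 I), deviating from the standard Gaussian.
mu, sigma = 0.5, 1.3

def score(x):
    # Analytical score of the assumed Gaussian target: grad_x log p(x).
    return -(x - mu) / sigma**2

# Start from the conventional reverse-starting point, N(0, I) ...
x = rng.standard_normal((10000, 4))

# ... and run unadjusted Langevin steps toward the target distribution:
# x <- x + (eps/2) * score(x) + sqrt(eps) * z,  z ~ N(0, I).
eps = 0.05
for _ in range(2000):
    x = x + 0.5 * eps * score(x) + np.sqrt(eps) * rng.standard_normal(x.shape)

print(x.mean(), x.std())  # close to mu and sigma (small finite-step bias)
```

In the paper's setting the score of the forward maximum perturbation distribution would come from the trained model rather than a closed form; the sketch only conveys why starting sampling from a distribution matched to the forward process removes the reverse-starting mismatch.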