Improved Diffusion-based Generative Model with Better Adversarial Robustness

📅 2025-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion probabilistic models (DPMs) and consistency models (CMs) suffer from inherent distribution mismatch between training and sampling, degrading both generation quality and robustness. This work provides the first theoretical analysis revealing their shared mismatch mechanism, proving that adversarial training is equivalent to distributionally robust optimization (DRO). Building on this insight, we propose a unified DRO framework that jointly enhances robustness and generation performance for both DPMs and their CM-distilled variants. Methodologically, we integrate adversarial training into the denoising process to explicitly mitigate input distribution shift during sampling. Experiments demonstrate substantial improvements in robustness against input perturbations, while maintaining or improving standard metrics—including FID and LPIPS—across multiple benchmarks. Our implementation is publicly available.

📝 Abstract
Diffusion Probabilistic Models (DPMs) have achieved significant success in generative tasks. However, their training and sampling processes suffer from the issue of distribution mismatch: during the denoising process, the input data distributions differ between the training and inference stages, potentially leading to inaccurate data generation. To mitigate this mismatch, we analyze the training objective of DPMs and theoretically demonstrate that the mismatch can be alleviated through Distributionally Robust Optimization (DRO), which is equivalent to performing robustness-driven Adversarial Training (AT) on DPMs. Furthermore, for the recently proposed Consistency Model (CM), which distills the inference process of the DPM, we prove that its training objective also encounters the mismatch issue. Fortunately, this issue can be mitigated by AT as well. Based on these insights, we propose to conduct efficient AT on both DPM and CM. Finally, extensive empirical studies validate the effectiveness of AT in diffusion-based models. The code is available at https://github.com/kugwzk/AT_Diff.
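The core idea above — replacing the clean denoising loss with a worst-case loss over perturbed inputs (the inner maximization of the DRO/AT objective) — can be sketched on a toy model. This is a minimal illustration, not the paper's method: the linear "denoiser", the dimensions, and the single FGSM-style inner step are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "denoiser": predicts noise as eps_hat = W @ x_t.
# A real DPM would use a neural network conditioned on the timestep t.
d = 8
W = rng.normal(size=(d, d)) / np.sqrt(d)

def denoise_loss(x_t, eps):
    """Standard DPM objective at one timestep: || eps_theta(x_t) - eps ||^2."""
    return float(np.sum((W @ x_t - eps) ** 2))

def adversarial_input(x_t, eps, radius=0.05):
    """One FGSM-style step: perturb x_t within an L_inf ball of the given
    radius to (approximately) maximize the denoising loss — the inner
    maximization of the adversarial-training / DRO objective."""
    grad = 2.0 * W.T @ (W @ x_t - eps)   # d(loss)/d(x_t) for the linear model
    return x_t + radius * np.sign(grad)

# Simulated clean sample, target noise, and a noisy input x_t
# (sqrt(alpha_bar) = 0.9 chosen arbitrarily for some timestep t).
x0 = rng.normal(size=d)
eps = rng.normal(size=d)
x_t = 0.9 * x0 + 0.436 * eps

clean = denoise_loss(x_t, eps)
robust = denoise_loss(adversarial_input(x_t, eps), eps)

# Training then minimizes `robust` instead of `clean`, so the model is
# optimized against the input distribution shift it will see at sampling time.
assert robust >= clean
```

Because the toy loss is convex in `x_t`, the signed-gradient step can only increase it, which is why the final assertion holds; in a real DPM the inner maximization would be approximated with one or a few gradient steps on the network input.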
Problem

Research questions and friction points this paper is trying to address.

Addresses the distribution mismatch between DPM training and sampling
Enhances robustness of generative models to input perturbations
Proposes efficient Adversarial Training for both DPMs and CMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial Training mitigates the training–sampling mismatch in DPMs
Adversarial Training shown to be equivalent to Distributionally Robust Optimization
Mismatch in Consistency Model training objectives likewise alleviated by AT
🔎 Similar Papers
No similar papers found.