CCDM: Continuous Conditional Diffusion Models for Image Generation

📅 2024-05-06
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses high-dimensional image generation conditioned on continuous scalar variables (e.g., regression labels), a setting where existing continuous conditional GANs (CcGANs) suffer from training instability and mainstream conditional diffusion models (CDMs) are ill-suited for continuous conditioning, exhibiting diffusion-process mismatch, inefficient label embedding, and unstable sampling. To bridge this gap, the paper proposes the Continuous Conditional Diffusion Model (CCDM), the first CDM designed specifically for scalar-conditioned generation. Its core innovations are: (i) a hard vicinal image denoising loss that enforces local consistency in the condition space; (ii) a differentiable scalar label embedding module that preserves fine-grained semantic relationships between labels; and (iii) an efficient conditional sampling procedure for stable, high-fidelity synthesis. Evaluated on four datasets at resolutions from 64×64 to 192×192, CCDM consistently outperforms CcGANs and other state-of-the-art methods, establishing a new benchmark. The implementation is publicly available.
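The "hard vicinal" idea inherited from CcGANs can be sketched as follows: when fitting the denoiser for a target label, only training samples whose labels fall within a small window (the vicinity) around that label contribute to the loss. A minimal numpy illustration of the vicinity weighting, with hypothetical names and a made-up vicinity radius `kappa` (in the paper this weighting sits inside a noise-prediction objective, which is omitted here):

```python
import numpy as np

def hard_vicinal_weights(labels, target, kappa):
    """Hard vicinity indicator: weight 1 if |y_i - target| <= kappa, else 0,
    then normalized so the in-vicinity samples share the loss equally."""
    w = (np.abs(labels - target) <= kappa).astype(float)
    s = w.sum()
    return w / s if s > 0 else w

labels = np.array([0.10, 0.12, 0.30, 0.55])  # normalized regression labels
w = hard_vicinal_weights(labels, target=0.11, kappa=0.05)
# only the first two samples lie in the vicinity; each gets weight 0.5
```

Samples outside the vicinity are weighted zero, so the denoiser at a given condition is trained only on images whose labels are locally consistent with it.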

📝 Abstract
Continuous Conditional Generative Modeling (CCGM) estimates high-dimensional data distributions, such as images, conditioned on scalar continuous variables (aka regression labels). While Continuous Conditional Generative Adversarial Networks (CcGANs) were designed for this task, their instability during adversarial learning often leads to suboptimal results. Conditional Diffusion Models (CDMs) offer a promising alternative, generating more realistic images, but their diffusion processes, label conditioning, and model fitting procedures are either not optimized for or incompatible with CCGM, making it difficult to integrate CcGANs' vicinal approach. To address these issues, we introduce Continuous Conditional Diffusion Models (CCDMs), the first CDM specifically tailored for CCGM. CCDMs address existing limitations with specially designed conditional diffusion processes, a novel hard vicinal image denoising loss, a customized label embedding method, and efficient conditional sampling procedures. Through comprehensive experiments on four datasets with resolutions ranging from 64x64 to 192x192, we demonstrate that CCDMs outperform state-of-the-art CCGM models, establishing a new benchmark. Ablation studies further validate the model design and implementation, highlighting that some widely used CDM implementations are ineffective for the CCGM task. Our code is publicly available at https://github.com/UBCDingXin/CCDM.
Problem

Research questions and friction points this paper is trying to address.

Optimizing image generation with continuous labels
Overcoming instability in adversarial learning models
Enhancing conditional diffusion for high-dimensional data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continuous Conditional Diffusion Models
Hard Vicinal Image Denoising Loss
Customized Label Embedding Method
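The customized label embedding replaces discrete class embeddings with a map that keeps nearby regression labels close in embedding space. A hedged sketch of one such scalar embedding, using sinusoidal features analogous to diffusion timestep embeddings; the function name and dimensions are illustrative, and the paper's actual module additionally involves trainable layers (omitted here):

```python
import numpy as np

def embed_label(y, dim=8, max_period=10000.0):
    """Map a scalar label y (assumed normalized to [0, 1]) to a dim-dimensional
    vector of sinusoidal features; a full model would feed this to a small MLP."""
    half = dim // 2
    freqs = np.exp(-np.log(max_period) * np.arange(half) / half)
    args = y * freqs
    return np.concatenate([np.sin(args), np.cos(args)])

e_a, e_b, e_c = embed_label(0.37), embed_label(0.38), embed_label(0.90)
# nearby labels (0.37 vs 0.38) map to nearby vectors; distant labels do not
close = np.linalg.norm(e_a - e_b)
far = np.linalg.norm(e_a - e_c)
```

Because the features vary smoothly with y, the embedding preserves the ordering and proximity of labels, which one-hot or lookup-table embeddings cannot do for continuous conditions.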
Xin Ding
School of Artificial Intelligence, Nanjing University of Information Science & Technology (NUIST), Nanjing, China
Yongwei Wang
Zhejiang University
AI4Media, Multimedia Forensics, Trust Media
Kao Zhang
School of Artificial Intelligence, Nanjing University of Information Science & Technology (NUIST), Nanjing, China
Z. Jane Wang
Professor of Electrical and Computer Engineering Dept., University of British Columbia, Canada
Signal/Image/Video processing, Machine Learning, Digital media data analytics, digital media security & forensics, biomedical si