🤖 AI Summary
This work addresses high-dimensional image generation under continuous scalar conditions (e.g., regression targets), where existing continuous conditional GANs (CcGANs) suffer from training instability, and mainstream conditional diffusion models (CDMs) are ill-suited for continuous conditioning: they exhibit diffusion process mismatch, inefficient label embedding, and unstable sampling. To bridge this gap, the authors propose the Continuous Conditional Diffusion Model (CCDM), the first CDM specifically designed for scalar-conditioned generation. Its core innovations include: (i) a hard vicinal image denoising loss that enforces local consistency in the condition space; (ii) a differentiable scalar label embedding module that preserves fine-grained semantic relationships; and (iii) an efficient conditional sampling procedure for stable, high-fidelity synthesis. Evaluated on four datasets with resolutions from 64×64 to 192×192, CCDM consistently outperforms CcGANs and other state-of-the-art methods, establishing a new benchmark. The implementation is publicly available.
📝 Abstract
Continuous Conditional Generative Modeling (CCGM) estimates high-dimensional data distributions, such as images, conditioned on scalar continuous variables (aka regression labels). While Continuous Conditional Generative Adversarial Networks (CcGANs) were designed for this task, their instability during adversarial learning often leads to suboptimal results. Conditional Diffusion Models (CDMs) offer a promising alternative, generating more realistic images, but their diffusion processes, label conditioning, and model fitting procedures are either not optimized for or incompatible with CCGM, making it difficult to integrate CcGANs' vicinal approach. To address these issues, we introduce Continuous Conditional Diffusion Models (CCDMs), the first CDM specifically tailored for CCGM. CCDMs address existing limitations with specially designed conditional diffusion processes, a novel hard vicinal image denoising loss, a customized label embedding method, and efficient conditional sampling procedures. Through comprehensive experiments on four datasets with resolutions ranging from 64×64 to 192×192, we demonstrate that CCDMs outperform state-of-the-art CCGM models, establishing a new benchmark. Ablation studies further validate the model design and implementation, highlighting that some widely used CDM implementations are ineffective for the CCGM task. Our code is publicly available at https://github.com/UBCDingXin/CCDM.
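To make the "hard vicinal" idea concrete, below is a minimal sketch of how a hard-vicinity-weighted denoising loss could be implemented. It follows the hard vicinal weighting used in CcGANs (an indicator on whether a sample's regression label falls within ±κ of the target label), applied to a per-sample denoising error. The function names, the numpy formulation, and the specific averaging scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def hard_vicinal_weights(batch_labels, target_label, kappa):
    """Indicator weights in the style of CcGAN's hard vicinity:
    1.0 if |label - target| <= kappa, else 0.0 (illustrative form)."""
    return (np.abs(batch_labels - target_label) <= kappa).astype(float)

def hard_vicinal_denoising_loss(pred_noise, true_noise, batch_labels,
                                target_label, kappa):
    """Squared denoising error per sample, averaged only over samples
    whose labels lie in the hard vicinity of the target label.
    (Hypothetical sketch; the paper's loss may differ in detail.)"""
    w = hard_vicinal_weights(batch_labels, target_label, kappa)
    # Mean squared error over all non-batch dimensions of each sample.
    per_sample = ((pred_noise - true_noise) ** 2).mean(
        axis=tuple(range(1, pred_noise.ndim)))
    # Weighted average; guard against an empty vicinity.
    return (w * per_sample).sum() / max(w.sum(), 1.0)

# Toy usage: only the first two samples fall inside the vicinity.
labels = np.array([0.10, 0.12, 0.50])
weights = hard_vicinal_weights(labels, target_label=0.11, kappa=0.02)
print(weights)  # samples with labels 0.10 and 0.12 get weight 1, 0.50 gets 0
```

The key design point this illustrates: samples whose labels are far from the conditioning label contribute nothing to the loss, which enforces local consistency in the condition space while still borrowing statistical strength from nearby labels.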