🤖 AI Summary
Traditional class-conditional generative models struggle to simultaneously achieve realism, class fidelity, and fairness in medical image synthesis, particularly for dermoscopic images, and often underrepresent rare disease classes. To address this, we propose Class-N-Diff, a classification-induced diffusion model that bidirectionally integrates a deep classifier into the diffusion process: classifier guidance steers reverse sampling toward stronger class consistency, while classification gradients flowing back through joint training refine both generation quality and the classifier's discriminative capability. Class-N-Diff trains the diffusion model and classifier jointly, incorporating a fine-grained class-conditioning mechanism, which yields substantial improvements in synthetic image realism, diversity, and rare-class recognition. Evaluated on multiple dermoscopic datasets, Class-N-Diff boosts classification accuracy by 3.2–5.8% over baselines and effectively mitigates data imbalance, establishing a new paradigm for fair and reliable AI-assisted skin cancer diagnosis.
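The summary mentions joint training of the diffusion model and the classifier but does not give the objective. A minimal sketch of what such a combined loss could look like, assuming a standard DDPM epsilon-prediction loss plus a weighted cross-entropy term; the function names, the `lambda_cls` weight, and the noise schedule are illustrative assumptions, not the paper's actual implementation:

```python
import torch
import torch.nn.functional as F

def joint_training_step(unet, classifier, x0, labels, alphas_cumprod, lambda_cls=0.1):
    """Hypothetical joint step: DDPM denoising loss + classification loss."""
    b = x0.size(0)
    t = torch.randint(0, alphas_cumprod.numel(), (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    # Forward diffusion: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    eps_pred = unet(x_t, t, labels)   # class-conditioned noise prediction
    loss_diff = F.mse_loss(eps_pred, noise)
    logits = classifier(x0)           # classifier trained alongside the diffusion model
    loss_cls = F.cross_entropy(logits, labels)
    return loss_diff + lambda_cls * loss_cls
```

Weighting the two terms with a scalar such as `lambda_cls` is one simple way to let the classification signal shape the shared training without dominating the denoising objective.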
📝 Abstract
Generative models, especially diffusion models, have demonstrated a remarkable capability to generate high-quality synthetic data, including medical images. However, traditional class-conditioned generative models often struggle to generate images that accurately represent specific medical categories, limiting their usefulness for applications such as skin cancer diagnosis. To address this problem, we propose a classification-induced diffusion model, Class-N-Diff, that simultaneously generates and classifies dermoscopic images. Class-N-Diff integrates a classifier within a diffusion model to guide image generation according to class conditions. The model thus exerts tighter control over class-conditioned image synthesis, producing more realistic and diverse images. The classifier, in turn, achieves improved performance, underscoring its effectiveness for downstream diagnostic tasks. This integration makes Class-N-Diff a robust tool for enhancing the quality and utility of diffusion-based synthetic dermoscopic image generation. Our code is available at https://github.com/Munia03/Class-N-Diff.
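The abstract does not detail how the classifier guides generation; one common mechanism consistent with this description is classifier guidance (Dhariwal & Nichol, 2021), where the gradient of the classifier's log-probability for the target class shifts the mean of each reverse diffusion step. A minimal sketch under that assumption; the guidance scale `s`, the `posterior_mean_fn` helper, and the network interfaces are hypothetical, not taken from the Class-N-Diff code:

```python
import torch

def guided_reverse_step(unet, classifier, posterior_mean_fn, x_t, t, y, sigma_t, s=1.0):
    """One classifier-guided reverse step: shift the model mean by
    s * sigma_t^2 * grad_x log p(y | x_t) (standard classifier guidance)."""
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        log_probs = torch.log_softmax(classifier(x_in), dim=-1)
        selected = log_probs[torch.arange(len(y)), y].sum()
        grad = torch.autograd.grad(selected, x_in)[0]  # grad of log p(y | x_t)
    eps = unet(x_t, t, y)                    # class-conditioned noise prediction
    mean = posterior_mean_fn(x_t, t, eps)    # posterior mean recovered from eps
    guided_mean = mean + s * (sigma_t ** 2) * grad
    return guided_mean + sigma_t * torch.randn_like(x_t)  # sample x_{t-1}; omit the noise at t = 0
```

Larger values of `s` push samples harder toward the target class at some cost in diversity, which matches the realism/class-fidelity trade-off the paper sets out to balance.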