Class-N-Diff: Classification-Induced Diffusion Model Can Make Fair Skin Cancer Diagnosis

📅 2025-10-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional class-conditional generative models struggle to simultaneously achieve realism, class fidelity, and fairness in medical image synthesis—particularly for dermatoscopic images—often underrepresenting rare disease classes. To address this, we propose the Classification-Induced Diffusion Model (CIDM), the first framework to bidirectionally integrate a deep classifier into the diffusion process: forward classification guidance steers reverse sampling for enhanced class consistency, while backward classification gradients refine both generation quality and discriminative capability. CIDM jointly trains the diffusion model and classifier, incorporating a fine-grained class-conditioning mechanism. This yields substantial improvements in synthetic image realism, diversity, and rare-class recognition. Evaluated on multiple dermatoscopic datasets, CIDM boosts classification accuracy by 3.2–5.8% over baselines and effectively mitigates data imbalance. The approach establishes a new paradigm for fair and reliable AI-assisted skin cancer diagnosis.

Technology Category

Application Category

📝 Abstract
Generative models, especially Diffusion Models, have demonstrated remarkable capability in generating high-quality synthetic data, including medical images. However, traditional class-conditioned generative models often struggle to generate images that accurately represent specific medical categories, limiting their usefulness for applications such as skin cancer diagnosis. To address this problem, we propose a classification-induced diffusion model, namely, Class-N-Diff, to simultaneously generate and classify dermoscopic images. Our Class-N-Diff model integrates a classifier within a diffusion model to guide image generation based on its class conditions. Thus, the model has better control over class-conditioned image synthesis, resulting in more realistic and diverse images. Additionally, the classifier demonstrates improved performance, highlighting its effectiveness for downstream diagnostic tasks. This unique integration in our Class-N-Diff makes it a robust tool for enhancing the quality and utility of diffusion model-based synthetic dermoscopic image generation. Our code is available at https://github.com/Munia03/Class-N-Diff.
Problem

Research questions and friction points this paper is trying to address.

Improving class-conditioned medical image generation accuracy
Enhancing skin cancer diagnosis through synthetic data
Integrating classification guidance into diffusion model training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates classifier into diffusion model for guidance
Simultaneously generates and classifies dermoscopic images
Enhances control over class-conditioned image synthesis
🔎 Similar Papers
No similar papers found.
Nusrat Munia
Nusrat Munia
Graduate Student, University of Kentucky
Computer VisionMedical ImagingMultimodal dataFairness
A
Abdullah Imran
Department of Computer Science, University of Kentucky, Lexington, KY 40506, USA