AdFair-CLIP: Adversarial Fair Contrastive Language-Image Pre-training for Chest X-rays

📅 2025-06-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
CLIP exhibits demographic biases—such as those related to race and gender—in medical image analysis, leading to unfair diagnostic outcomes and degraded performance for underrepresented subgroups. To address this, we propose AdFair-CLIP, a fairness-enhanced contrastive language–image pretraining framework tailored for chest X-ray analysis. Its core innovation is a sensitive-attribute adversarial intervention mechanism that explicitly disentangles sensitive features within the joint language–image embedding space, thereby mitigating spurious correlations. AdFair-CLIP integrates contrastive learning, cross-modal alignment, and decorrelation constraints to support fair zero-shot and few-shot generalization. Evaluated on a multi-center chest X-ray dataset, AdFair-CLIP achieves significant improvements across all demographic subgroups: average classification accuracy increases by 3.2%, and the Equalized Odds difference decreases by 58%. Crucially, it preserves strong generalization capability. This work establishes a new benchmark for fairness-aware learning in medical vision foundation models.
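The adversarial intervention described above is commonly realized with a gradient-reversal adversary: a small head tries to predict the sensitive attribute (e.g., race or gender) from the joint embedding, while reversed gradients push the encoder to strip that information. The sketch below is illustrative only—the class names, dimensions, and the reversal coefficient `lam` are assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients
    on the backward pass, so the encoder is trained *against* the
    sensitive-attribute adversary."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None


class SensitiveAttributeAdversary(nn.Module):
    """Hypothetical adversary head over a CLIP-style joint embedding.
    Predicts the sensitive attribute; gradient reversal makes the
    upstream encoder remove that signal (not the paper's exact code)."""

    def __init__(self, embed_dim: int, n_groups: int, lam: float = 1.0):
        super().__init__()
        self.lam = lam
        self.head = nn.Linear(embed_dim, n_groups)

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        return self.head(GradReverse.apply(embedding, self.lam))


# Toy usage: 8 CLIP-style embeddings of dim 512, binary sensitive attribute.
emb = torch.randn(8, 512, requires_grad=True)
adv = SensitiveAttributeAdversary(embed_dim=512, n_groups=2)
logits = adv(emb)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (8,)))
loss.backward()  # emb.grad now points *toward* removing attribute info
```

In a full training loop, this adversarial loss would be combined with the contrastive alignment objective, so fairness and diagnostic accuracy are optimized jointly rather than traded off post hoc.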

📝 Abstract
Contrastive Language-Image Pre-training (CLIP) models have demonstrated superior performance across various visual tasks including medical image classification. However, fairness concerns, including demographic biases, have received limited attention for CLIP models. This oversight leads to critical issues, particularly those related to race and gender, resulting in disparities in diagnostic outcomes and reduced reliability for underrepresented groups. To address these challenges, we introduce AdFair-CLIP, a novel framework employing adversarial feature intervention to suppress sensitive attributes, thereby mitigating spurious correlations and improving prediction fairness. We conduct comprehensive experiments on chest X-ray (CXR) datasets, and show that AdFair-CLIP significantly enhances both fairness and diagnostic accuracy, while maintaining robust generalization in zero-shot and few-shot scenarios. These results establish new benchmarks for fairness-aware learning in CLIP-based medical diagnostic models, particularly for CXR analysis.
Problem

Research questions and friction points this paper is trying to address.

Address demographic biases in CLIP models for medical imaging
Mitigate race and gender disparities in diagnostic outcomes
Improve fairness and accuracy in chest X-ray analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial feature intervention to suppress biases
Improves fairness in CLIP-based medical diagnostics
Maintains accuracy in zero-shot and few-shot scenarios
Chenlang Yi
Texas A&M University, College Station, TX, USA
Zizhan Xiong
Texas A&M University, College Station, TX, USA
Qi Qi
The University of Iowa, Iowa City, IA, USA
Xiyuan Wei
Texas A&M University, College Station, TX, USA
Girish Bathla
Mayo Clinic School of Medicine, Rochester, MN, USA
Ching-Long Lin
The University of Iowa, Iowa City, IA, USA
Bobak Jack Mortazavi
Texas A&M University, College Station, TX, USA
Tianbao Yang
Texas A&M University (machine learning, stochastic optimization)