🤖 AI Summary
This work investigates the theoretical connection between contrastive learning and domain adaptation, specifically for cross-domain analysis of medical imaging—namely, mammography. Method: We establish, for the first time, a theoretical equivalence between contrastive losses (the self-supervised NT-Xent loss and the Supervised Contrastive loss) and the Class-wise Maximum Mean Discrepancy (CMMD), proving that optimizing the former inherently promotes both domain alignment and class separability. Building on this insight, we propose a theory-driven supervised contrastive domain adaptation framework that explicitly links the contrastive objective to CMMD. Results: Extensive experiments on synthetic and real-world mammography datasets demonstrate that minimizing the Supervised Contrastive loss significantly improves cross-domain feature alignment, inter-class discriminability, and classification accuracy over baselines. This work provides a rigorous theoretical foundation and an effective practical paradigm for leveraging contrastive learning in medical domain adaptation.
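To make the Supervised Contrastive loss mentioned above concrete, here is a minimal NumPy sketch of the standard formulation (Khosla et al., 2020): each anchor is pulled toward all same-class samples in the batch and pushed from the rest. The temperature value and the toy features are illustrative assumptions, not values from this work.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised Contrastive loss on a batch of feature vectors.

    features: (n, d) array of embeddings; labels: (n,) integer class labels.
    Positives for an anchor are all other samples sharing its label.
    """
    # L2-normalize so similarities are cosine similarities
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)

    not_self = ~np.eye(n, dtype=bool)                       # exclude self-pairs
    pos_mask = (labels[:, None] == labels[None, :]) & not_self

    # log-softmax over all samples except the anchor itself
    sim_others = np.where(not_self, sim, -np.inf)
    sim_others -= sim_others.max(axis=1, keepdims=True)      # numerical stability
    log_prob = sim_others - np.log(np.exp(sim_others).sum(axis=1, keepdims=True))

    # mean log-probability of positives, averaged over anchors
    n_pos = np.maximum(pos_mask.sum(axis=1), 1)
    loss = -(np.where(pos_mask, log_prob, 0.0)).sum(axis=1) / n_pos
    return loss.mean()
```

As a sanity check, a batch whose embeddings cluster by label should score a lower loss than the same embeddings with labels scrambled across clusters.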
📝 Abstract
This work studies the relationship between Contrastive Learning and Domain Adaptation from a theoretical perspective. We relate two standard contrastive losses, the self-supervised NT-Xent loss and the Supervised Contrastive loss, to the Class-wise Maximum Mean Discrepancy (CMMD), a dissimilarity measure widely used for Domain Adaptation. We show that minimizing the contrastive losses decreases the CMMD and simultaneously improves class-separability, laying the theoretical groundwork for using Contrastive Learning in the context of Domain Adaptation. Given the relevance of Domain Adaptation in medical imaging, we focus the experiments on mammography. Extensive experiments on three mammography datasets (synthetic patches, clinical patches, and clinical full images) show improved Domain Adaptation, class-separability, and classification performance when minimizing the Supervised Contrastive loss.
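The CMMD discussed above can be sketched as the squared Maximum Mean Discrepancy computed per class between source and target samples, then averaged over classes. The following NumPy implementation uses an RBF kernel with the biased MMD estimator; the kernel choice and bandwidth are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF (Gaussian) kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def cmmd(Xs, ys, Xt, yt, gamma=1.0):
    """Class-wise MMD^2 between source (Xs, ys) and target (Xt, yt).

    For each class c, computes the biased squared-MMD estimate between
    source samples of class c and target samples of class c, then
    averages across classes. Assumes every class appears in both domains.
    """
    classes = np.unique(ys)
    total = 0.0
    for c in classes:
        S, T = Xs[ys == c], Xt[yt == c]
        total += (rbf_kernel(S, S, gamma).mean()
                  + rbf_kernel(T, T, gamma).mean()
                  - 2.0 * rbf_kernel(S, T, gamma).mean())
    return total / len(classes)
```

When source and target class-conditional distributions match, the estimate is near zero; a domain shift (e.g. a constant feature offset in the target) drives it up, which is the quantity the contrastive losses are shown to decrease.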