🤖 AI Summary
To address multi-source-free domain adaptation (MSFDA) in multi-centre medical imaging diagnosis, where the original source data cannot be accessed due to privacy constraints and models must generalise across heterogeneous devices and institutions, this paper proposes an uncertainty-aware adaptive knowledge distillation framework. The method combines model-level initialisation with instance-level, pseudo-label-guided adaptation: high-confidence pseudo-labels are selected via uncertainty estimation, and knowledge distillation at both levels enables self-supervised adaptation without any source data. On two multi-centre medical imaging benchmarks, the approach improves average accuracy by over 5.2% compared with prior single-source source-free domain adaptation methods, overcoming their generalisation bottleneck in multi-source settings. The implementation code is publicly available.
📝 Abstract
Source-free domain adaptation (SFDA) alleviates the discrepancy between source and target domains without accessing the source data, thereby preserving data privacy. However, conventional SFDA methods assume a single source domain, an inherent limitation in medical contexts, where data are typically collected from multiple institutions using various equipment. To address this problem, we propose a simple yet effective method, named Uncertainty-aware Adaptive Distillation (UAD), for the multi-source-free unsupervised domain adaptation (MSFDA) setting. UAD performs well-calibrated knowledge distillation at (i) the model level, to deliver coordinated and reliable base-model initialisation, and (ii) the instance level, via model adaptation guided by high-quality pseudo-labels, thereby obtaining a high-performance target-domain model. To verify its general applicability, we evaluate UAD on two image-based diagnosis benchmarks built from two multi-centre datasets, where our method shows a significant performance gain compared with existing works. The code is available at https://github.com/YXSong000/UAD.
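The instance-level idea, selecting high-confidence pseudo-labels on unlabelled target data via uncertainty estimation, can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the simple ensemble averaging of source-model outputs, the use of predictive entropy as the uncertainty measure, and the fixed threshold are all assumptions.

```python
import numpy as np

def select_confident_pseudo_labels(probs_per_model, entropy_threshold=0.5):
    """Select high-confidence pseudo-labels from an ensemble of source models.

    probs_per_model: array of shape (num_models, num_samples, num_classes)
        holding softmax outputs of each frozen source model on target data.
    Returns (indices, pseudo_labels) for samples whose predictive entropy
    falls below the threshold.
    """
    # Average the source models' predictions (a plain ensemble; the paper's
    # model-level weighting may differ from this).
    mean_probs = probs_per_model.mean(axis=0)            # (N, C)
    # Predictive entropy of the averaged distribution as the uncertainty
    # score (assumed here); lower entropy = more confident prediction.
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)
    confident = entropy < entropy_threshold
    pseudo_labels = mean_probs.argmax(axis=1)
    return np.where(confident)[0], pseudo_labels[confident]

# Two source models, two target samples: the first sample is predicted
# confidently, the second is near-uniform and gets filtered out.
probs = np.array([
    [[0.97, 0.02, 0.01], [0.40, 0.35, 0.25]],
    [[0.95, 0.03, 0.02], [0.34, 0.33, 0.33]],
])
indices, labels = select_confident_pseudo_labels(probs, entropy_threshold=0.5)
print(indices, labels)  # only sample 0 survives, with pseudo-label 0
```

The retained (index, pseudo-label) pairs would then serve as supervision targets when adapting the target-domain model.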