🤖 AI Summary
To address the significant performance degradation of segmentation models on fundus images across domains—primarily caused by discrepancies in imaging protocols—this paper proposes a source-free unsupervised domain adaptation framework. Methodologically, it integrates gradient-guided pseudo-label refinement, cosine-similarity-based contrastive learning, and uncertainty-aware prototype feature modeling with a corresponding contrastive loss to enhance class discriminability and cross-domain feature alignment. Extensive experiments on multiple cross-domain fundus image benchmarks (e.g., REFUGE→RIM-ONE, Drishti-GS→REFUGE) demonstrate that the proposed approach consistently outperforms state-of-the-art methods: average Dice scores for optic disc and cup segmentation improve by 2.1–3.8%, boundary localization accuracy is markedly enhanced, and the model exhibits superior generalizability and robustness under domain shift.
📝 Abstract
Accurate segmentation of the optic disc and cup is critical for the early diagnosis and management of ocular diseases such as glaucoma. However, segmentation models trained on one dataset often suffer significant performance degradation when applied to target data acquired under different imaging protocols or conditions. To address this challenge, we propose **Grad-CL**, a novel source-free domain adaptation framework that leverages a pre-trained source model and unlabeled target data to robustly adapt segmentation performance without requiring access to the original source data. Grad-CL combines a gradient-guided pseudo-label refinement module with a cosine-similarity-based contrastive learning strategy. In the first stage, salient class-specific features are extracted via a gradient-based mechanism, enabling more accurate uncertainty quantification and robust prototype estimation for refining noisy pseudo-labels. In the second stage, a contrastive loss based on cosine similarity is employed to explicitly enforce inter-class separability between the gradient-informed features of the optic cup and disc. Extensive experiments on challenging cross-domain fundus imaging datasets demonstrate that Grad-CL outperforms state-of-the-art unsupervised and source-free domain adaptation methods, achieving superior segmentation accuracy and improved boundary delineation. Project and code are available at https://visdomlab.github.io/GCL/.