Label Calibration in Source Free Domain Adaptation

๐Ÿ“… 2025-01-13
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address the severe noise and low reliability of pseudo-labels in unsupervised source-free domain adaptation (SFDA), this paper proposes a pseudo-label refinement framework integrating evidential deep learning with Softmax calibration. It models predictive uncertainty via a Dirichlet prior and introduces a single-forward estimation mechanism to mitigate label noise bias induced by Softmaxโ€™s translation invariance. Additionally, an information maximization loss is incorporated to enhance discriminability on the target domain. This work is the first to jointly leverage evidential learning and Softmax calibration for SFDA, significantly improving pseudo-label quality and model generalization. Extensive experiments on multiple benchmark datasets demonstrate that the proposed method consistently outperforms existing state-of-the-art approaches, achieving average classification accuracy gains of 2.3โ€“4.7 percentage points.

Technology Category

Application Category

๐Ÿ“ Abstract
Source-free domain adaptation (SFDA) utilizes a pre-trained source model with unlabeled target data. Self-supervised SFDA techniques generate pseudolabels from the pre-trained source model, but these pseudolabels often contain noise due to domain discrepancies between the source and target domains. Traditional self-supervised SFDA techniques rely on deterministic model predictions using the softmax function, leading to unreliable pseudolabels. In this work, we propose to introduce predictive uncertainty and softmax calibration for pseudolabel refinement using evidential deep learning. The Dirichlet prior is placed over the output of the target network to capture uncertainty using evidence with a single forward pass. Furthermore, softmax calibration solves the translation invariance problem to assist in learning with noisy labels. We incorporate a combination of evidential deep learning loss and information maximization loss with calibrated softmax in both prior and non-prior target knowledge SFDA settings. Extensive experimental analysis shows that our method outperforms other state-of-the-art methods on benchmark datasets.
Problem

Research questions and friction points this paper is trying to address.

Unsupervised Domain Adaptation
Pseudo Label Accuracy
Pre-trained Model Performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uncertainty Estimation
Evidence-based Deep Learning
Robust Pseudo-labeling
๐Ÿ”Ž Similar Papers
No similar papers found.