🤖 AI Summary
To address weak cross-domain generalization, hyperparameter sensitivity, and poor calibration of neural networks under covariate shift, this paper proposes SIDDA, an end-to-end domain adaptation framework requiring minimal hyperparameter tuning. Methodologically, it introduces two key contributions: (i) the integration of the Sinkhorn divergence into a dynamic domain alignment module, enabling unsupervised, plug-and-play target-domain alignment without manual tuning; and (ii) the empirical finding that the group order of equivariant neural networks (ENNs), e.g., the order $N$ of the dihedral group $D_N$, correlates positively with domain adaptation performance, an insight that strengthens the model's structural priors. The approach substantially improves target-domain classification accuracy (up to ~40% gain) and calibration quality (ECE and Brier score improvements exceeding an order of magnitude), while remaining compatible with diverse backbone architectures. The gains in both generalization and calibration are especially pronounced when SIDDA is paired with ENNs.
📝 Abstract
Modern neural networks (NNs) often do not generalize well in the presence of a "covariate shift"; that is, in situations where the training and test data distributions differ, but the conditional distribution of classification labels remains unchanged. In such cases, NN generalization can be reduced to a problem of learning more domain-invariant features. Domain adaptation (DA) methods include a range of techniques aimed at achieving this; however, these methods have struggled with the need for extensive hyperparameter tuning, which in turn incurs significant computational costs. In this work, we introduce SIDDA, an out-of-the-box DA training algorithm built upon the Sinkhorn divergence, that can achieve effective domain alignment with minimal hyperparameter tuning and computational overhead. We demonstrate the efficacy of our method on multiple simulated and real datasets of varying complexity, including simple shapes, handwritten digits, and real astronomical observations. SIDDA is compatible with a variety of NN architectures, and it works particularly well in improving classification accuracy and model calibration when paired with equivariant neural networks (ENNs). We find that SIDDA enhances the generalization capabilities of NNs, achieving up to a $\approx 40\%$ improvement in classification accuracy on unlabeled target data. We also study the efficacy of DA on ENNs with respect to the varying group orders of the dihedral group $D_N$, and find that the model performance improves as the degree of equivariance increases. Finally, we find that SIDDA enhances model calibration on both source and target data, achieving over an order of magnitude improvement in the ECE and Brier score. SIDDA's versatility, combined with its automated approach to domain alignment, has the potential to advance multi-dataset studies by enabling the development of highly generalizable models.
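The Sinkhorn divergence at the heart of the method is the debiased, entropy-regularized optimal transport cost between the source and target feature distributions. The sketch below is an illustrative NumPy implementation (not the paper's code; function names and defaults are our own choices): entropic OT is computed with log-domain Sinkhorn iterations on a squared-Euclidean cost, and the divergence subtracts the two self-transport terms so that it vanishes when the two point clouds coincide.

```python
import numpy as np

def _logsumexp(z, axis):
    """Numerically stable log-sum-exp along an axis."""
    m = z.max(axis=axis, keepdims=True)
    return np.squeeze(m + np.log(np.exp(z - m).sum(axis=axis, keepdims=True)), axis=axis)

def entropic_ot(x, y, eps=1.0, n_iter=300):
    """Entropy-regularized OT cost between uniform point clouds x (n,d) and y (m,d),
    computed via log-domain Sinkhorn iterations on a squared-Euclidean cost."""
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # pairwise cost matrix
    log_a = np.full(len(x), -np.log(len(x)))            # uniform source weights
    log_b = np.full(len(y), -np.log(len(y)))            # uniform target weights
    f, g = np.zeros(len(x)), np.zeros(len(y))
    for _ in range(n_iter):
        # alternating dual-potential updates in the log domain
        f = -eps * _logsumexp((g[None, :] - C) / eps + log_b[None, :], axis=1)
        g = -eps * _logsumexp((f[:, None] - C) / eps + log_a[:, None], axis=0)
    # transport plan P_ij = a_i b_j exp((f_i + g_j - C_ij) / eps); return <P, C>
    log_P = log_a[:, None] + log_b[None, :] + (f[:, None] + g[None, :] - C) / eps
    return float((np.exp(log_P) * C).sum())

def sinkhorn_divergence(x, y, eps=1.0, n_iter=300):
    """Debiased Sinkhorn divergence: OT(x,y) - (OT(x,x) + OT(y,y)) / 2.
    Zero when the two point clouds are identical."""
    return (entropic_ot(x, y, eps, n_iter)
            - 0.5 * entropic_ot(x, x, eps, n_iter)
            - 0.5 * entropic_ot(y, y, eps, n_iter))
```

In a SIDDA-style training loop, a term like `sinkhorn_divergence(source_features, target_features)` would be added to the classification loss so that gradients pull the two latent distributions together; in practice one would use a differentiable GPU implementation (e.g., the GeomLoss library) rather than this NumPy sketch.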