How Well Do Self-Supervised Methods Perform in Cross-Domain Few-Shot Learning?

📅 2022-02-18
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
This paper systematically evaluates the effectiveness and robustness of self-supervised representation learning for cross-domain few-shot learning (CDFSL), aiming to reduce deep models' reliance on large-scale labeled data. The authors establish an objective evaluation framework comprising six distinct classifiers, covering mainstream self-supervised methods—including contrastive learning, rotation prediction, and jigsaw puzzles—and conduct experiments on standard CDFSL benchmarks (e.g., MiniImageNet → CUB). Key contributions include: (1) the first empirical finding that source-domain self-supervised performance is nearly uncorrelated with target-domain generalization; (2) the identification of prototypical classification as the optimal, highly generalizable evaluation protocol for CDFSL; (3) experimental validation that self-supervised methods match or surpass supervised state-of-the-art in shallow networks and low-data regimes, while significantly improving robustness; and (4) the observation that no single self-supervised method universally dominates across all cross-domain tasks.
📝 Abstract
Cross-domain few-shot learning (CDFSL) remains a largely unsolved problem in the area of computer vision, while self-supervised learning presents a promising solution. Both learning methods attempt to alleviate deep networks' dependency on large-scale labeled data. Although self-supervised methods have recently advanced dramatically, their utility for CDFSL is relatively unexplored. In this paper, we investigate the role of self-supervised representation learning in the context of CDFSL via a thorough evaluation of existing methods. It comes as a surprise that even with shallow architectures or small training datasets, self-supervised methods can perform favorably compared to the existing SOTA methods. Nevertheless, no single self-supervised approach dominates all datasets, indicating that existing self-supervised methods are not universally applicable. In addition, we find that representations extracted from self-supervised methods exhibit stronger robustness than those from the supervised method. Intriguingly, whether self-supervised representations perform well on the source domain has little correlation with their applicability on the target domain. As part of our study, we conduct an objective measurement of the performance of six kinds of representative classifiers. The results suggest the Prototypical Classifier as the standard evaluation recipe for CDFSL.
Problem

Research questions and friction points this paper is trying to address.

Self-supervised Learning
Cross-domain Few-shot Learning
Stability and Effectiveness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised Learning
Cross-domain Few-shot Learning
Prototypical Classifier
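The Prototypical Classifier the paper recommends as the evaluation recipe classifies each query example by the distance from its embedding to per-class prototypes, where each prototype is the mean of that class's support-set embeddings. A minimal sketch in NumPy (function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def prototypical_classify(support, support_labels, query, n_way):
    """Nearest-prototype classification for an n_way few-shot episode.

    support: (n_way * k_shot, d) support-set embeddings
    support_labels: (n_way * k_shot,) integer labels in [0, n_way)
    query: (n_query, d) query embeddings
    Returns an (n_query,) array of predicted labels.
    """
    # Prototype = mean embedding of each class's support examples
    prototypes = np.stack(
        [support[support_labels == c].mean(axis=0) for c in range(n_way)]
    )
    # Euclidean distance from every query point to every prototype
    dists = np.linalg.norm(query[:, None, :] - prototypes[None, :, :], axis=-1)
    # Assign each query to its nearest prototype
    return dists.argmin(axis=1)

# Toy 2-way episode with well-separated clusters
support = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
query = np.array([[0.05, 0.05], [5.05, 5.05]])
print(prototypical_classify(support, labels, query, n_way=2))  # → [0 1]
```

Because the classifier has no trainable parameters beyond the frozen embedding network, it isolates representation quality, which is presumably why it generalizes well as an evaluation protocol across domains.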