Beyond Instance Consistency: Investigating View Diversity in Self-supervised Learning

📅 2025-09-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work challenges the “instance consistency” assumption in self-supervised learning (SSL)—that different views of the same image necessarily share identical semantics—which fails on non-iconic data. We propose a novel perspective: view diversity, rather than strict consistency, enhances representation learning, and empirically find that *moderate* semantic divergence between views—not maximal divergence—yields optimal performance for downstream classification and dense prediction tasks. To quantify inter-view semantic distance, we introduce Earth Mover’s Distance (EMD) as a proxy estimator of mutual information. View diversity is then controllably modulated via multi-scale cropping with zero-overlap constraints. Extensive experiments across diverse benchmarks confirm the existence of an optimal diversity regime, enabling consistent and significant gains over conventional contrastive SSL baselines. Our findings establish a new paradigm for SSL, bridging theoretical insight—rethinking semantic alignment—with practical design principles for view generation.

📝 Abstract
Self-supervised learning (SSL) conventionally relies on the instance consistency paradigm, assuming that different views of the same image can be treated as positive pairs. However, this assumption breaks down for non-iconic data, where different views may contain distinct objects or semantic information. In this paper, we investigate the effectiveness of SSL when instance consistency is not guaranteed. Through extensive ablation studies, we demonstrate that SSL can still learn meaningful representations even when positive pairs lack strict instance consistency. Furthermore, our analysis reveals that increasing view diversity, by enforcing zero overlap between crops or using smaller crop scales, can enhance downstream performance on classification and dense prediction tasks. However, excessive diversity is found to reduce effectiveness, suggesting an optimal range for view diversity. To quantify this, we adopt the Earth Mover's Distance (EMD) as an estimator of the mutual information between views, finding that moderate EMD values correlate with improved SSL learning, providing insights for future SSL framework design. We validate our findings across a range of settings, highlighting their robustness and applicability on diverse data sources.
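To make the EMD estimator concrete: when both views yield the same number of patch embeddings and the weights are uniform, EMD reduces to a minimum-cost one-to-one matching, which can be solved with the Hungarian algorithm. The sketch below illustrates this special case with a cosine-distance cost; the function name and feature shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def emd_between_views(feat_a, feat_b):
    """EMD between two equal-sized sets of patch embeddings with uniform
    weights; in this case EMD equals the optimal one-to-one matching cost."""
    # Cosine distance between L2-normalised embeddings as the ground cost.
    a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    cost = 1.0 - a @ b.T                    # (n, n) pairwise cost matrix
    row, col = linear_sum_assignment(cost)  # min-cost perfect matching
    return cost[row, col].mean()            # average transport cost

rng = np.random.default_rng(0)
view1 = rng.normal(size=(49, 128))  # e.g. a 7x7 grid of patch features
view2 = rng.normal(size=(49, 128))
d_cross = emd_between_views(view1, view2)  # distance between two views
d_self = emd_between_views(view1, view1)   # a view against itself: ~0
```

Low EMD here means the two views transport cheaply onto each other (high semantic overlap); the paper's finding is that a moderate, not minimal, value of this distance marks the useful diversity regime.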
Problem

Research questions and friction points this paper is trying to address.

Investigating SSL effectiveness when instance consistency is not guaranteed
Analyzing the impact of view diversity on SSL downstream performance
Quantifying the optimal view-diversity range with Earth Mover's Distance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adopts the Earth Mover's Distance as an estimator of mutual information between views
Enforces zero overlap between crops to increase view diversity
Uses smaller crop scales to further modulate diversity
Huaiyuan Qin
Institute for Infocomm Research (I2R), A*STAR, Singapore
Computer Vision, Deep Learning
Muli Yang
Institute for Infocomm Research (I2R), A*STAR, Singapore
Computer Vision, Machine Learning, Open-World Learning, Multimodal Modeling
Siyuan Hu
Unknown affiliation
Cognitive Neuroscience, MRI, Attention
Peng Hu
Sichuan University, China
Yu Zhang
Southeast University, China
Chen Gong
Shanghai Jiaotong University, China
Hongyuan Zhu
Institute for Infocomm Research (I2R), A*STAR, Singapore