Rethinking Positive Pairs in Contrastive Learning

📅 2024-10-23
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Existing contrastive learning methods define similarity exclusively over semantically consistent sample pairs, overlooking latent structural similarities inherent in semantically dissimilar pairs. To address this limitation, we propose SimLAP—a novel framework that redefines positive pairs not as semantically identical samples but as learnable, discriminative subspace-aligned pairs. SimLAP jointly optimizes pairwise similarity estimation and subspace projection within an end-to-end training paradigm, incorporating both contrastive loss and explicit subspace alignment constraints. By uncovering and leveraging structural similarities among inter-class samples residing in shared latent subspaces, SimLAP breaks from conventional similarity modeling paradigms. Extensive experiments across multiple benchmarks demonstrate its effectiveness: SimLAP significantly improves few-shot transfer performance and model robustness under distribution shifts. Moreover, it offers a new perspective on unsupervised similarity learning—shifting focus from semantic identity to geometric consistency in learned representation subspaces.

📝 Abstract
Training methods in AI do involve semantically distinct pairs of samples; however, their role is typically to enhance between-class separability. The actual notion of similarity is normally learned from semantically identical pairs. This paper presents SimLAP: a simple framework for learning visual representations from arbitrary pairs. SimLAP explores the possibility of learning similarity from semantically distinct sample pairs. The approach is motivated by the observation that for any pair of classes there exists a subspace in which semantically distinct samples exhibit similarity. This phenomenon can be exploited for a novel method of learning, which optimises the similarity of an arbitrary pair of samples while simultaneously learning the enabling subspace. The feasibility of the approach is demonstrated experimentally and its merits discussed.
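The core idea in the abstract, learning a projection under which two semantically distinct samples become similar, can be illustrated with a toy sketch. This is a hypothetical NumPy illustration of the general principle (gradient ascent on the cosine similarity of two projected embeddings), not the authors' SimLAP implementation; the function name, dimensions, and hyperparameters are all assumptions for the example.

```python
import numpy as np

def learn_enabling_subspace(z1, z2, dim=4, lr=0.1, steps=200, seed=0):
    """Toy illustration: learn a linear projection W (the 'enabling
    subspace') under which two semantically distinct embeddings z1 and
    z2 become similar. Hypothetical sketch, not the paper's method."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(dim, z1.shape[0]))
    for _ in range(steps):
        p1, p2 = W @ z1, W @ z2
        n1, n2 = np.linalg.norm(p1), np.linalg.norm(p2)
        sim = p1 @ p2 / (n1 * n2)
        # Gradient of cosine similarity w.r.t. each projected vector
        g1 = p2 / (n1 * n2) - sim * p1 / n1**2
        g2 = p1 / (n1 * n2) - sim * p2 / n2**2
        # Chain rule back to W: d(W @ z)/dW gives outer products
        W += lr * (np.outer(g1, z1) + np.outer(g2, z2))
    p1, p2 = W @ z1, W @ z2
    return W, p1 @ p2 / (np.linalg.norm(p1) * np.linalg.norm(p2))

# Two orthogonal (maximally dissimilar) input embeddings
z1 = np.eye(8)[0]
z2 = np.eye(8)[1]
W, final_sim = learn_enabling_subspace(z1, z2)
```

Even for orthogonal inputs, a subspace exists in which their projections align; the sketch finds it by ascent on cosine similarity. The paper's actual framework additionally couples this with a contrastive objective over many pairs, which the toy example omits.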
Problem

Research questions and friction points this paper is trying to address.

Learning similarity from semantically distinct sample pairs
Exploring subspaces where distinct samples exhibit similarity
Optimizing similarity for arbitrary pairs while learning subspaces
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning similarity from distinct sample pairs
Optimizing similarity via enabling subspace learning
Exploiting inter-class similarity for representation learning
Jiantao Wu
Researcher, Adaptemy
Knowledge Graphs · Semantic Web · Machine Learning
Shentong Mo
Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, Pennsylvania, USA
Zhenhua Feng
Jiangnan University, 1800 Lihu Avenue, Wuxi, Jiangsu, P. R. China
Sara Atito
Surrey Institute for People-Centred AI, GU2 7XH Surrey, UK; Centre For Vision, Speech and Signal Processing (CVSSP), GU2 7XH Surrey, UK
Josef Kittler
Centre For Vision, Speech and Signal Processing (CVSSP), GU2 7XH Surrey, UK
Muhammad Awais
Surrey Institute for People-Centred AI, GU2 7XH Surrey, UK; Centre For Vision, Speech and Signal Processing (CVSSP), GU2 7XH Surrey, UK