🤖 AI Summary
Existing theoretical analyses of self-supervised contrastive learning lack a systematic characterization of how data augmentation types influence learning performance.
Method: This paper introduces the first augmentation-aware theoretical framework, grounded in a semantic label hypothesis that explicitly models interpretable relationships between augmentation types and upper bounds on supervised risk.
Contributions/Results: (1) We derive the first augmentation-aware generalization error bound, revealing a fundamental trade-off between representation discriminability and invariance induced by augmentations. (2) We theoretically establish an explicit trade-off between augmentation strength and downstream discriminative performance. (3) Through pixel-level and representation-level experiments—evaluating canonical augmentations such as rotation and cropping—we quantitatively validate their impact on classification error; empirical results align closely with theoretical predictions.
📝 Abstract
Self-supervised contrastive learning has emerged as a powerful tool in machine learning and computer vision for learning meaningful representations from unlabeled data. Its empirical success has motivated many theoretical studies that aim to reveal its learning mechanisms. However, in the existing theoretical research, the role of data augmentation remains under-explored, especially the effects of specific augmentation types. To fill this gap, we propose, for the first time, an augmentation-aware error bound for self-supervised contrastive learning, showing that the supervised risk is bounded not only by the unsupervised risk, but also explicitly by a trade-off induced by data augmentation. Then, under a novel semantic label assumption, we discuss how specific augmentation methods affect the error bound. Lastly, we conduct both pixel- and representation-level experiments to verify our proposed theoretical results.
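To make the setting concrete, the following is a minimal sketch (not the paper's method) of a standard InfoNCE-style contrastive loss over two augmented views, using NumPy and Gaussian noise as a stand-in "augmentation". The noise scale plays the role of augmentation strength: stronger augmentation pulls positive pairs apart, which is the kind of invariance/discriminability trade-off the paper analyzes.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE contrastive loss over two augmented views.

    z1, z2: (N, d) arrays of representations; row i of z1 and row i of z2
    form a positive pair, all other rows serve as negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature          # (N, N) similarity matrix
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z1))
    return -log_prob[idx, idx].mean()         # -log softmax at positives

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))                  # 8 samples, 16-dim features
# Weak "augmentation": small additive noise, views stay well aligned
weak = info_nce_loss(x + 0.01 * rng.normal(size=x.shape),
                     x + 0.01 * rng.normal(size=x.shape))
# Strong "augmentation": large noise, positives drift toward negatives
strong = info_nce_loss(x + 0.01 * rng.normal(size=x.shape),
                       x + 2.0 * rng.normal(size=x.shape))
print(weak < strong)
```

Here the additive-noise augmentation and the specific scales (0.01 vs. 2.0) are illustrative choices only; the paper's theory concerns how real augmentation types such as rotation and cropping enter the error bound.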