🤖 AI Summary
Vision Transformers (ViTs) suffer from overfitting in few-shot learning due to their lack of spatial inductive bias. Method: We propose ViT-SOM, the first end-to-end trainable ViT–Self-Organizing Map (SOM) co-architecture, which explicitly incorporates the SOM's topology-preserving prior to compensate for the ViT's structural limitations. Specifically, we embed a differentiable SOM module into the ViT encoder as a topological regularizer and design a combined contrastive and topology-aware loss that jointly optimizes the structural consistency and discriminability of the learned representations. Contribution/Results: Unlike conventional implicit regularization methods, ViT-SOM requires no external pretraining or CNN-based knowledge distillation. On multiple small-scale image benchmarks, it significantly improves unsupervised representation quality, downstream classification accuracy, and clustering validity, establishing a novel paradigm for few-shot visual learning.
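To make the idea concrete, here is a minimal sketch of what a differentiable SOM regularizer could look like in PyTorch. All names, hyperparameters, and design choices below (grid size, Gaussian neighborhood, temperature-softmax assignments) are illustrative assumptions, not the paper's actual implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferentiableSOM(nn.Module):
    """Hypothetical sketch: prototypes live on a 2-D grid; soft assignments
    keep everything differentiable; a fixed Gaussian neighborhood over the
    grid pulls adjacent prototypes toward the same inputs, which is what
    preserves topology."""

    def __init__(self, dim, grid=(8, 8), sigma=1.5, tau=0.1):
        super().__init__()
        h, w = grid
        self.prototypes = nn.Parameter(torch.randn(h * w, dim) * 0.02)
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        coords = torch.stack([ys.flatten(), xs.flatten()], dim=1).float()
        # Pairwise squared grid distances -> fixed Gaussian neighborhood weights.
        grid_d2 = torch.cdist(coords, coords).pow(2)
        self.register_buffer("neighborhood", torch.exp(-grid_d2 / (2 * sigma ** 2)))
        self.tau = tau

    def forward(self, z):
        # z: (batch, dim) embeddings from the ViT encoder (e.g. CLS tokens).
        d2 = torch.cdist(z, self.prototypes).pow(2)   # (batch, K) feature distances
        assign = F.softmax(-d2 / self.tau, dim=1)     # soft winner assignment
        # Spread each soft assignment to the grid neighbors of its winner.
        spread = assign @ self.neighborhood           # (batch, K)
        spread = spread / spread.sum(dim=1, keepdim=True)
        # Topology-aware quantization loss: neighbors of the winner must also be close.
        topo_loss = (spread * d2).sum(dim=1).mean()
        return topo_loss, assign

som = DifferentiableSOM(dim=32)
loss, assign = som(torch.randn(16, 32))
loss.backward()  # gradients flow into the prototypes (and encoder, if attached)
```

In a training loop, `topo_loss` would be added to a contrastive objective with some weighting coefficient; the balance between the two terms is a tunable assumption here.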
📝 Abstract
Vision Transformers (ViTs) have demonstrated exceptional performance across a variety of vision tasks. However, they tend to underperform on smaller datasets due to their inherent lack of inductive biases. Current approaches address this limitation implicitly, often by pairing ViTs with pretext tasks or by distilling knowledge from convolutional neural networks (CNNs) to strengthen the prior. In contrast, Self-Organizing Maps (SOMs), a widely adopted self-supervised framework, are inherently structured to preserve topology and spatial organization, making them a promising candidate for directly addressing the limitations of ViTs on small or limited training datasets. Despite this potential, equipping SOMs with modern deep learning architectures remains largely unexplored. In this study, we conduct a novel exploration of how ViTs and SOMs can empower each other, aiming to bridge this critical research gap. Our findings demonstrate that these architectures can synergistically enhance each other, leading to significantly improved performance in both unsupervised and supervised tasks. Code will be made publicly available.