🤖 AI Summary
Current vision-language models (VLMs) rely heavily on predefined class names for few-shot adaptation, which limits their applicability when class names are unknown or cannot be specified. This work introduces a "vocabulary-free" few-shot learning setting that requires only a small number of target-class images, with no class names or textual prompts. To address it, the authors propose Similarity Mapping (SiM), which classifies target instances from their similarity scores against a set of generic (textual or visual) prompts and learns a lightweight mapping from those scores to target classes, eliminating handcrafted prompt engineering. SiM is highly efficient (learning the mapping takes under one second), offers interpretability by linking target classes to generic prompts, and demonstrates strong performance, making it a practical baseline for adapting VLMs when class names are unavailable.
📝 Abstract
Recent advances in few-shot adaptation for Vision-Language Models (VLMs) have greatly expanded their ability to generalize across tasks using only a few labeled examples. However, existing approaches primarily build upon the strong zero-shot priors of these models by leveraging carefully designed, task-specific prompts. This dependence on predefined class names can restrict their applicability, especially in scenarios where exact class names are unavailable or difficult to specify. To address this limitation, we introduce vocabulary-free few-shot learning for VLMs, a setting where target class instances (that is, images) are available but their corresponding names are not. We propose Similarity Mapping (SiM), a simple yet effective baseline that classifies target instances solely based on similarity scores with a set of generic prompts (textual or visual), eliminating the need for carefully handcrafted prompts. Although conceptually straightforward, SiM demonstrates strong performance, operates with high computational efficiency (learning the mapping typically takes less than one second), and provides interpretability by linking target classes to generic prompts. We believe that our approach could serve as an important baseline for future research in vocabulary-free few-shot learning. Code is available at https://github.com/MaxZanella/vocabulary-free-FSL.
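The core idea described in the abstract (represent each image by its similarity scores to a set of generic prompts, then learn a lightweight mapping from those scores to the target classes) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the ridge-regression fit, function names, and synthetic similarity scores are all assumptions made here for demonstration; in practice the similarity vectors would come from a frozen VLM such as CLIP, and the authors' actual mapping may differ (see the linked repository for the real code).

```python
import numpy as np

def fit_similarity_mapping(sim_support, labels, num_classes, ridge=1e-3):
    """Fit a linear map W from generic-prompt similarity scores to classes.

    sim_support: (N, P) array, similarity of N support images to P prompts.
    labels:      (N,) integer class labels in [0, num_classes).
    Returns W:   (P, C) array mapping similarity vectors to class scores.
    """
    # One-hot encode the few-shot targets.
    targets = np.eye(num_classes)[labels]                       # (N, C)
    s = sim_support                                             # (N, P)
    # Closed-form ridge-regularized least squares:
    #   W = (S^T S + ridge * I)^{-1} S^T Y
    w = np.linalg.solve(s.T @ s + ridge * np.eye(s.shape[1]), s.T @ targets)
    return w

def predict(sim_query, w):
    """Classify query images from their prompt-similarity vectors."""
    return (sim_query @ w).argmax(axis=1)

# Synthetic demo: each class has a characteristic similarity profile
# over the generic prompts, perturbed by a little noise.
rng = np.random.default_rng(0)
num_classes, num_prompts, shots = 3, 10, 10
prototypes = rng.normal(size=(num_classes, num_prompts))
labels = np.repeat(np.arange(num_classes), shots)
sims = prototypes[labels] + 0.05 * rng.normal(size=(len(labels), num_prompts))

w = fit_similarity_mapping(sims, labels, num_classes)
accuracy = (predict(sims, w) == labels).mean()
```

The closed-form fit is what makes the sub-second training time plausible: with P generic prompts and C classes, learning the mapping is a single P-by-P linear solve, independent of the VLM's size. Interpretability follows from inspecting the rows of W, which show which generic prompts each target class is mapped from.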