🤖 AI Summary
In visual representation learning, human perceptual priors are typically incorporated only during downstream fine-tuning, leaving the model's initial representational capacity unchanged.
Method: We propose Perceptual Initialization (PI), which injects human perceptual structure, derived from the NIGHTS triplet dataset, directly into the visual encoder's parameter initialization, followed by self-supervised pretraining on YFCC15M. This is the first approach to embed perceptual priors at initialization rather than solely during fine-tuning, enabling zero-shot cross-task generalization without task-specific adaptation.
Results: PI yields consistent improvements across 29 zero-shot classification and 2 retrieval benchmarks. On ImageNet-1K, zero-shot Top-1 accuracy improves after approximately 15 pretraining epochs, and retrieval metrics (e.g., R@1, R@5) show consistent, stable gains. These results indicate that perception-guided initialization strengthens vision-language alignment and generalization capacity.
📝 Abstract
We introduce Perceptual Initialization (PI), a paradigm shift in visual representation learning that incorporates human perceptual structure during the initialization phase rather than as a downstream fine-tuning step. By integrating human-derived triplet embeddings from the NIGHTS dataset to initialize a CLIP vision encoder, followed by self-supervised learning on YFCC15M, our approach demonstrates significant zero-shot performance improvements, without any task-specific fine-tuning, across 29 zero-shot classification and 2 retrieval benchmarks. On ImageNet-1K, zero-shot gains emerge after approximately 15 epochs of pretraining. Benefits are observed across datasets of various scales, with improvements manifesting at different stages of the pretraining process depending on dataset characteristics. Our approach consistently enhances zero-shot top-1 accuracy, top-5 accuracy, and retrieval recall (e.g., R@1, R@5) across these diverse evaluation tasks, without requiring any adaptation to target domains. These findings challenge the conventional wisdom of using human perceptual data primarily for fine-tuning and demonstrate that embedding human perceptual structure during early representation learning yields more capable, better vision-language aligned systems that generalize immediately to unseen tasks. Our work shows that "beginning with you", starting with human perception, provides a stronger foundation for general-purpose vision-language intelligence.
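The two-stage recipe described above can be sketched as a minimal toy in NumPy. This is an illustrative assumption, not the paper's code: `encode` stands in for the CLIP vision encoder, and the triplet hinge loss mimics how a NIGHTS-style two-alternative forced-choice (2AFC) judgment would shape the initialization before contrastive pretraining.

```python
import numpy as np

def encode(W, x):
    """Toy stand-in for a vision encoder (the paper uses a CLIP ViT):
    a single linear map followed by L2 normalization."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def perceptual_triplet_loss(W, anchor, positive, negative, margin=0.2):
    """Hinge loss on a NIGHTS-style 2AFC triplet: the human-chosen
    'positive' image should embed closer to the anchor than the
    rejected 'negative', by at least `margin`."""
    za, zp, zn = (encode(W, v) for v in (anchor, positive, negative))
    d_pos = np.sum((za - zp) ** 2, axis=-1)  # squared distance to chosen image
    d_neg = np.sum((za - zn) ** 2, axis=-1)  # squared distance to rejected image
    return float(np.maximum(0.0, d_pos - d_neg + margin).mean())

# Two-stage recipe (shapes and names are illustrative):
# Stage 1 (Perceptual Initialization): optimize encoder weights W to
#   minimize perceptual_triplet_loss over NIGHTS triplets.
# Stage 2: use the resulting W as the *initialization* of the CLIP vision
#   encoder, then run standard contrastive image-text pretraining on YFCC15M.

if __name__ == "__main__":
    W = np.eye(4)                          # identity "encoder" for illustration
    a = np.array([[1.0, 0.0, 0.0, 0.0]])   # anchor embedding input
    p = a.copy()                           # positive identical to anchor
    n = np.array([[0.0, 1.0, 0.0, 0.0]])   # negative orthogonal to anchor
    print(perceptual_triplet_loss(W, a, p, n))  # satisfied triplet -> 0.0
```

A satisfied triplet (positive at the anchor, negative far away) incurs zero loss; swapping the positive and negative images produces a positive loss that a real training loop would minimize by gradient descent.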