🤖 AI Summary
This work addresses the semantic gap between image and text modalities in CLIP-style vision-language models, which hinders downstream task performance. We propose CLIP-Refine, a lightweight post-pretraining framework. Its core innovations are two novel mechanisms: (1) Random Feature Alignment (RaFA), which enables efficient cross-modal alignment via stochastic reference vector sampling and shared prior constraints; and (2) Hybrid Contrastive Distillation (HyCD), which jointly optimizes contrastive learning and knowledge distillation by integrating ground-truth pairs with CLIP-generated soft labels. CLIP-Refine requires only one epoch of fine-tuning on a small-scale dataset, introduces no architectural modifications, and fully preserves zero-shot generalization capability. Experiments demonstrate consistent improvements across multi-task classification and cross-modal retrieval benchmarks, achieving an average +2.3% gain in zero-shot accuracy while reducing training cost by over 90%.
📝 Abstract
Contrastive language image pre-training (CLIP) is an essential component of building modern vision-language foundation models. While CLIP demonstrates remarkable zero-shot performance on downstream tasks, the multi-modal feature spaces still suffer from a modality gap, which is a gap between image and text feature clusters and limits downstream task performance. Although existing works attempt to address the modality gap by modifying pre-training or fine-tuning, they struggle with heavy training costs with large datasets or degradations of zero-shot performance. This paper presents CLIP-Refine, a post-pre-training method for CLIP models at a phase between pre-training and fine-tuning. CLIP-Refine aims to align the feature space with 1 epoch training on small image-text datasets without zero-shot performance degradations. To this end, we introduce two techniques: random feature alignment (RaFA) and hybrid contrastive-distillation (HyCD). RaFA aligns the image and text features to follow a shared prior distribution by minimizing the distance to random reference vectors sampled from the prior. HyCD updates the model with hybrid soft labels generated by combining ground-truth image-text pair labels and outputs from the pre-trained CLIP model. This contributes to achieving both maintaining the past knowledge and learning new knowledge to align features. Our extensive experiments with multiple classification and retrieval tasks show that CLIP-Refine succeeds in mitigating the modality gap and improving the zero-shot performance.