Accelerating Conditional Prompt Learning via Masked Image Modeling for Vision-Language Models

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language models (e.g., CLIP) suffer from task-specific overfitting and poor cross-class generalization in zero-shot transfer. To address this, we propose ProMIM, a lightweight, plug-and-play framework that integrates masked image modeling (MIM) with conditional prompt learning. Its core innovation lies in dynamically generating robust, instance-level prompt vectors from the local features of visible image patches, without modifying the backbone architecture. ProMIM seamlessly augments existing prompt-based methods (e.g., CoOp and CoCoOp), significantly enhancing generalization to unseen classes in both zero-shot and few-shot classification. Extensive experiments demonstrate consistent performance gains of +1.2–3.8% accuracy across diverse benchmarks, while incurring negligible computational overhead. The method thus achieves a favorable trade-off between efficacy, efficiency, and practical deployability.

📝 Abstract
Vision-language models (VLMs) like CLIP excel in zero-shot learning but often require resource-intensive training to adapt to new tasks. Prompt learning techniques, such as CoOp and CoCoOp, offer efficient adaptation but tend to overfit to known classes, limiting generalization to unseen categories. We introduce ProMIM, a plug-and-play framework that enhances conditional prompt learning by integrating masked image modeling (MIM) into existing VLM pipelines. ProMIM leverages a simple yet effective masking strategy to generate robust, instance-conditioned prompts, seamlessly augmenting methods like CoOp and CoCoOp without altering their core architectures. By masking image patches and using only the visible patches' representations to guide prompt generation, ProMIM improves feature robustness and mitigates overfitting, all while introducing negligible additional computational cost. Extensive experiments across zero-shot and few-shot classification tasks demonstrate that ProMIM consistently boosts generalization performance when plugged into existing approaches, providing a practical, lightweight solution for real-world vision-language applications.
Problem

Research questions and friction points this paper is trying to address.

Enhance conditional prompt learning for vision-language models
Mitigate overfitting in prompt learning for unseen categories
Improve feature robustness with masked image modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates masked image modeling into VLMs
Uses masking strategy for robust prompts
Enhances generalization with minimal cost
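The mechanism the bullets describe, randomly masking image patches and conditioning the learnable prompt tokens on the pooled features of the remaining visible patches, can be sketched in a few lines of numpy. This is a minimal illustration under assumptions, not the authors' implementation: the function names (`mask_patches`, `instance_prompt`), the mean-pooling, and the single linear projection are hypothetical simplifications of the CoCoOp-style conditioning described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_patches(patches, mask_ratio=0.5, rng=rng):
    """MIM-style masking: randomly keep a subset of patch features."""
    n = patches.shape[0]
    keep = max(1, int(round(n * (1 - mask_ratio))))
    idx = rng.permutation(n)[:keep]
    return patches[idx]

def instance_prompt(patches, ctx, W, b, mask_ratio=0.5):
    """Condition shared context tokens on visible-patch features
    to produce an instance-level prompt."""
    visible = mask_patches(patches, mask_ratio)
    pooled = visible.mean(axis=0)        # (d_feat,) summary of visible patches
    shift = pooled @ W + b               # project into the prompt space, (d_ctx,)
    return ctx + shift                   # broadcast shift over all context tokens

# Toy dimensions: 49 patches with 64-dim features, 4 context tokens of width 512
patches = rng.standard_normal((49, 64))
ctx = rng.standard_normal((4, 512))     # learnable context (as in CoOp)
W = rng.standard_normal((64, 512)) * 0.01
b = np.zeros(512)

prompts = instance_prompt(patches, ctx, W, b)
print(prompts.shape)  # (4, 512)
```

Because only the small projection (`W`, `b`) is new and the backbone is untouched, this kind of conditioning adds negligible parameters and compute, which is consistent with the plug-and-play, low-overhead claim above.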