Multi-Cache Enhanced Prototype Learning for Test-Time Generalization of Vision-Language Models

📅 2025-08-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
In zero-shot test-time adaptation, low-entropy samples become unreliable under distribution shifts, leading to poor intra-class compactness of learned prototypes. Method: We propose a multi-cache-augmented prototype learning framework that jointly leverages vision–language modalities. It introduces three complementary caching mechanisms—entropy cache, alignment cache, and negative cache—to enable cross-modal prototype alignment and negative-sample calibration; additionally, prototype residual fine-tuning is incorporated to explicitly model and optimize intra-class compactness. Contribution/Results: Theoretical analysis and empirical validation establish a positive correlation between cache quality and intra-class compactness. Our method achieves significant improvements in generalization across 15 downstream tasks, attaining state-of-the-art performance.

📝 Abstract
In the zero-shot setting, test-time adaptation (TTA) adjusts pre-trained models using unlabeled data from the test phase to improve performance on unknown test distributions. Existing cache-enhanced TTA methods rely on a low-entropy criterion to select samples for prototype construction, assuming intra-class compactness. However, low-entropy samples may be unreliable under distribution shifts, and the resulting prototypes may not yield compact intra-class distributions. This study identifies a positive correlation between cache-enhanced performance and intra-class compactness. Based on this observation, we propose Multi-Cache enhanced Prototype-based Test-Time Adaptation (MCP), featuring three caches: an entropy cache for initializing prototype representations with low-entropy samples, an alignment cache for integrating visual and textual information to achieve compact intra-class distributions, and a negative cache for prediction calibration using high-entropy samples. We further develop MCP++, a framework that incorporates cross-modal prototype alignment and residual learning and introduces prototype residual fine-tuning. Comparative and ablation experiments across 15 downstream tasks demonstrate that the proposed method and framework achieve state-of-the-art generalization performance.
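To make the entropy-cache idea concrete, here is a minimal NumPy sketch of a per-class cache that keeps only the lowest-entropy test features and derives class prototypes from them, falling back to the text embedding when a class has no cached samples. All names (`EntropyCache`, `capacity`, `prototypes`) are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector."""
    return -np.sum(p * np.log(p + 1e-12))

class EntropyCache:
    """Per-class cache keeping the K lowest-entropy test features (illustrative sketch)."""
    def __init__(self, num_classes, capacity=3):
        self.cache = {c: [] for c in range(num_classes)}  # class -> list of (entropy, feature)
        self.capacity = capacity

    def add(self, feature, probs):
        """Insert a test feature under its predicted class, keeping only the most confident entries."""
        c = int(np.argmax(probs))
        self.cache[c].append((entropy(probs), feature))
        # retain only the K lowest-entropy (most confident) entries per class
        self.cache[c] = sorted(self.cache[c], key=lambda e: e[0])[: self.capacity]

    def prototypes(self, text_features):
        """Class prototypes: mean of cached features, falling back to the text embedding."""
        protos = []
        for c in range(len(text_features)):
            entries = self.cache[c]
            if entries:
                v = np.mean([f for _, f in entries], axis=0)
            else:
                v = text_features[c]
            protos.append(v / np.linalg.norm(v))
        return np.stack(protos)
```

A prediction could then combine `features @ prototypes.T` with the usual zero-shot logits; how the paper weights the three caches is not specified here.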
Problem

Research questions and friction points this paper is trying to address.

Improves test-time generalization for vision-language models
Addresses unreliable low-entropy samples under distribution shifts
Enhances intra-class compactness via multi-cache prototype learning
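The reported correlation between cache quality and intra-class compactness presumes some way to measure compactness. One common choice (a hedged stand-in; the paper's exact definition may differ) is the mean cosine similarity of each class's features to its class mean:

```python
import numpy as np

def intra_class_compactness(features_by_class):
    """Mean cosine similarity of each class's features to its normalized class mean.
    Higher values indicate tighter (more compact) clusters.
    Illustrative metric only; not necessarily the paper's definition."""
    sims = []
    for feats in features_by_class:
        X = np.asarray(feats, dtype=float)
        X = X / np.linalg.norm(X, axis=1, keepdims=True)
        proto = X.mean(axis=0)
        proto = proto / np.linalg.norm(proto)
        sims.extend(X @ proto)
    return float(np.mean(sims))
```

Under this metric, a tightly clustered cache scores near 1, while features scattered by a distribution shift score much lower.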
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Cache enhanced Prototype-based Test-Time Adaptation
Cross-modal prototype alignment and residual learning
Prototype residual fine-tuning for generalization
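Prototype residual fine-tuning can be pictured as learning a small additive correction to frozen prototypes so cached features sit closer to their class prototype. The sketch below is an assumed gradient-ascent formulation on cosine similarity, not the paper's actual objective or optimizer:

```python
import numpy as np

def residual_finetune(prototypes, features, labels, lr=0.1, steps=50):
    """Illustrative prototype residual fine-tuning: learn a residual added to each
    frozen prototype so class features gain cosine similarity to their prototype.
    The objective and hyperparameters are assumptions, not the paper's."""
    residual = np.zeros_like(prototypes)
    X = features / np.linalg.norm(features, axis=1, keepdims=True)
    for _ in range(steps):
        P = prototypes + residual
        Pn = P / np.linalg.norm(P, axis=1, keepdims=True)
        for c in range(len(prototypes)):
            idx = labels == c
            if not idx.any():
                continue
            mean_feat = X[idx].mean(axis=0)
            # gradient of cos(mean_feat, P[c]) w.r.t. P[c]:
            # component of mean_feat orthogonal to the prototype direction
            grad = mean_feat - (mean_feat @ Pn[c]) * Pn[c]
            residual[c] += lr * grad / np.linalg.norm(P[c])
    return prototypes + residual
```

Keeping the base prototypes frozen and optimizing only the residual mirrors the framing in the summary, where compactness is "explicitly modeled and optimized" on top of the cached prototypes.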
Authors
Xinyu Chen
Shanghai University
Haotian Zhai
University of Minnesota Twin Cities
Test-Time Adaptation · Reinforcement Learning · Multimodal
Can Zhang
Beijing University of Chemical Technology
Xiupeng Shi
Shanghai University
Ruirui Li
Beijing University of Chemical Technology