🤖 AI Summary
In zero-shot test-time adaptation, low-entropy samples become unreliable under distribution shifts, leading to poor intra-class compactness of the learned prototypes. Method: We propose a multi-cache-augmented prototype learning framework that jointly leverages vision–language modalities. It introduces three complementary caching mechanisms—an entropy cache, an align cache, and a negative cache—to enable cross-modal prototype alignment and negative-sample calibration; prototype residual fine-tuning is additionally incorporated to explicitly model and optimize intra-class compactness. Contribution/Results: Theoretical analysis and empirical validation establish a positive correlation between cache quality and intra-class compactness. Our method achieves significant improvements in generalization across 15 downstream tasks, attaining state-of-the-art performance.
📝 Abstract
In the zero-shot setting, test-time adaptation (TTA) adjusts pre-trained models using unlabeled data from the test phase to enhance performance on unknown test distributions. Existing cache-enhanced TTA methods rely on a low-entropy criterion to select samples for prototype construction, assuming intra-class compactness. However, low-entropy samples may be unreliable under distribution shifts, and the resulting prototypes may not ensure compact intra-class distributions. This study identifies a positive correlation between cache-enhanced performance and intra-class compactness. Based on this observation, we propose Multi-Cache enhanced Prototype-based Test-Time Adaptation (MCP), featuring three caches: an entropy cache for initializing prototype representations with low-entropy samples, an align cache for integrating visual and textual information to achieve compact intra-class distributions, and a negative cache for prediction calibration using high-entropy samples. We further develop MCP++, a framework incorporating cross-modal prototype alignment and residual learning, which introduces prototype residual fine-tuning. Comparative and ablation experiments across 15 downstream tasks demonstrate that the proposed method and framework achieve state-of-the-art generalization performance.
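To make the entropy-cache idea above concrete, here is a minimal sketch of how a low-entropy cache might maintain class prototypes at test time. The `EntropyCache` class, its `capacity` parameter, and the sort-and-evict policy are illustrative assumptions for exposition, not the paper's actual implementation; the align and negative caches would follow analogous per-class bookkeeping.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    return float(-np.sum(p * np.log(p + 1e-12)))

class EntropyCache:
    """Hypothetical per-class cache keeping the K lowest-entropy test features.

    Each incoming unlabeled sample is filed under its predicted class;
    high-entropy entries beyond the capacity are evicted.
    """
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.store = {}  # class id -> list of (entropy, feature)

    def add(self, feat, probs):
        c = int(np.argmax(probs))            # pseudo-label from the prediction
        items = self.store.setdefault(c, [])
        items.append((entropy(probs), feat))
        items.sort(key=lambda x: x[0])       # lowest entropy first
        del items[self.capacity:]            # evict the least reliable extras

    def prototype(self, c):
        """Class prototype: mean of cached features, L2-normalized."""
        feats = np.stack([f for _, f in self.store[c]])
        proto = feats.mean(axis=0)
        return proto / np.linalg.norm(proto)
```

Under this sketch, intra-class compactness corresponds to how tightly the cached features for a class cluster around `prototype(c)`, which is the quantity the paper correlates with cache-enhanced performance.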