AI Summary
Existing test-time prompt tuning methods suffer from three key limitations under test-time distribution shifts: overreliance on image-only augmentations, neglect of multimodal information, and poor few-shot generalization. To address these, we propose IT3A (Image-Text Test-Time Adaptation), a novel multimodal test-time adaptation framework. IT3A is the first to introduce generative multimodal augmentation at test time, leveraging a pretrained diffusion model to jointly synthesize aligned image–text augmented samples. It further introduces a cross-modal logits-level cosine similarity filtering mechanism to ensure semantic consistency between generated modalities. Instead of standard prompt tuning, IT3A employs lightweight adapters for greater template flexibility and robustness. Evaluated across diverse distribution shift benchmarks, IT3A achieves an average zero-shot accuracy improvement of 5.50%, significantly outperforming state-of-the-art test-time prompt tuning approaches.
Abstract
Existing test-time prompt tuning (TPT) methods focus on single-modality data, primarily enhancing images and using confidence scores to filter out unreliable augmented images. However, while image generation models can produce visually diverse images, single-modality data augmentation techniques still fail to capture the comprehensive knowledge provided by different modalities. Additionally, we note that the performance of TPT-based methods drops significantly when the number of augmented images is limited, which is not unusual given the computational expense of generative augmentation. To address these issues, we introduce IT3A, a novel test-time adaptation method that uses a pre-trained generative model for multi-modal augmentation of each test sample from unknown new domains. By combining augmented data from pre-trained vision and language models, we enhance the model's ability to adapt to unknown new test data. Additionally, to ensure that key semantics are accurately retained when generating diverse visual and textual augmentations, we apply cosine similarity filtering between the logits of the augmented images and text and those of the original test data. This process filters out spurious augmentations and inadequate combinations. To leverage the diverse augmentations provided by the generative model across different modalities, we replace prompt tuning with an adapter for greater flexibility in utilizing text templates. Our experiments on test datasets with distribution shifts and domain gaps show that, in a zero-shot setting, IT3A outperforms state-of-the-art test-time prompt tuning methods with a 5.50% increase in accuracy.
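The cross-modal filtering step described above lends itself to a short sketch. Below is a minimal illustration of logits-level cosine similarity filtering: augmented views whose class logits drift too far from those of the original test sample are discarded. This is a sketch only, assuming the logits come from a CLIP-like zero-shot classifier; the function names and the `threshold` value are hypothetical, not taken from the paper.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two logit vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_augmentations(original_logits: np.ndarray,
                         augmented_logits: list,
                         threshold: float = 0.8) -> list:
    """Return indices of augmented views whose class logits remain
    semantically consistent with the original test sample, i.e. whose
    cosine similarity to the original logits meets the threshold.
    The threshold value here is a hypothetical hyperparameter."""
    return [i for i, logits in enumerate(augmented_logits)
            if cosine_similarity(original_logits, logits) >= threshold]

# Toy example: the first augmentation preserves the original prediction,
# while the second flips it to a different class and is filtered out.
original = np.array([2.0, 0.1, 0.1])
augmented = [np.array([1.8, 0.2, 0.1]),   # consistent view
             np.array([0.1, 2.0, 0.1])]   # spurious view
kept = filter_augmentations(original, augmented)
```

In practice the same check would be applied to both generated images and generated text prompts, so that only image-text combinations agreeing with the original sample's prediction contribute to the adaptation step.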