Diffusion-Enhanced Test-time Adaptation with Text and Image Augmentation

πŸ“… 2024-12-12
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing test-time prompt tuning methods suffer from three key limitations under test-time distribution shifts: overreliance on image-only augmentations, neglect of multimodal information, and sharp performance drops when only a few augmented samples are available. To address these, we propose IT3A (Image-Text Test-Time Adaptation), a novel multimodal test-time adaptation framework. IT3A is the first to introduce generative multimodal augmentation at test time, leveraging a pretrained diffusion model to jointly synthesize aligned image–text augmented samples. It further introduces a cross-modal, logits-level cosine similarity filtering mechanism to ensure semantic consistency between the generated modalities. Instead of standard prompt tuning, IT3A employs lightweight adapters for greater template flexibility and robustness. Evaluated across diverse distribution-shift benchmarks, IT3A achieves an average zero-shot accuracy improvement of 5.50%, significantly outperforming state-of-the-art test-time prompt tuning approaches.

πŸ“ Abstract
Existing test-time prompt tuning (TPT) methods focus on single-modality data, primarily augmenting images and using confidence scores to filter out inaccurate ones. However, while image generation models can produce visually diverse images, single-modality augmentation still fails to capture the comprehensive knowledge provided by different modalities. Additionally, we note that the performance of TPT-based methods drops significantly when the number of augmented images is limited, which is not unusual given the computational expense of generative augmentation. To address these issues, we introduce IT3A, a novel test-time adaptation method that utilizes a pre-trained generative model for multi-modal augmentation of each test sample from unknown new domains. By combining augmented data from pre-trained vision and language models, we enhance the model's ability to adapt to unknown new test data. Additionally, to ensure that key semantics are accurately retained across the generated visual and textual augmentations, we apply cosine similarity filtering between the logits of the augmented images and text and those of the original test data. This process allows us to filter out spurious augmentations and inadequate combinations. To leverage the diverse enhancements provided by the generative model across different modalities, we replace prompt tuning with an adapter for greater flexibility in utilizing text templates. Our experiments on test datasets with distribution shifts and domain gaps show that, in a zero-shot setting, IT3A outperforms state-of-the-art test-time prompt tuning methods with a 5.50% increase in accuracy.
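The logit-level filtering step described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the threshold value, and the use of raw NumPy arrays in place of a CLIP-style model's outputs are all illustrative assumptions. The idea is simply to keep only those augmented samples whose class logits remain directionally close to the logits of the original test sample:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D logit vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_augmentations(original_logits, augmented_logits, threshold=0.8):
    """Keep augmentations whose logits stay close to the original test sample's.

    original_logits:  shape (C,)  -- logits of the unmodified test sample
    augmented_logits: shape (N, C) -- logits of N generated image/text variants
    threshold:        illustrative cutoff; the paper's value may differ
    Returns the indices of augmentations that pass the filter.
    """
    return [i for i, aug in enumerate(augmented_logits)
            if cosine_similarity(original_logits, aug) >= threshold]

# Example: a rescaled copy of the original logits passes (cosine = 1.0),
# while a semantically flipped prediction is filtered out.
original = np.array([2.0, 0.5, -1.0])
augmented = np.stack([original * 1.1, np.array([-2.0, 0.5, 1.0])])
kept = filter_augmentations(original, augmented)  # -> [0]
```

Filtering on logits rather than raw features compares the class-level predictions directly, so an augmentation is discarded as soon as it shifts the predicted semantics, even if it remains visually plausible.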
Problem

Research questions and friction points this paper is trying to address.

Multimodal Information Fusion
Limited Data Enhancement
Cross-Domain Adaptability
Innovation

Methods, ideas, or system contributions that make the work stand out.

IT3A
Diffusion-enhanced Test-time Adaptation
Multimodal Augmentation
πŸ”Ž Similar Papers
No similar papers found.
Chun-Mei Feng
Assistant Professor/Ad Astra Fellow, University College Dublin, Ireland
AI for Healthcare · Multi-modal Learning · Federated Learning
Yuanyang He
National University of Singapore, Singapore
Jian Zou
University of Electronic Science and Technology of China
Li-ion Battery · Energy Storage Materials
Salman H. Khan
Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), UAE, and Australian National University, Canberra ACT, Australia
Huan Xiong
Harbin Institute of Technology
Combinatorics · Machine Learning
Zhen Li
Chinese University of Hong Kong, Shenzhen, China
Wangmeng Zuo
School of Computer Science and Technology, Harbin Institute of Technology
Computer Vision · Image Processing · Generative AI · Deep Learning · Biometrics
R. Goh
Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), Singapore
Yong Liu
Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), Singapore