Quality Text, Robust Vision: The Role of Language in Enhancing Visual Robustness of Vision-Language Models

📅 2025-07-22
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing adversarial training methods for fine-tuning vision-language models (e.g., CLIP) to improve visual robustness overlook the quality and semantic richness of language supervision: supervised approaches rely on coarse class labels, leading to category overfitting, while unsupervised methods lack semantic guidance, rendering them vulnerable to text-aware adversarial attacks. To address this, we propose QT-AFT, the first framework to explicitly leverage high-quality textual descriptions (e.g., image captions and attribute annotations) for enhancing visual robustness. QT-AFT employs semantically rich text to guide both adversarial example generation and visual encoder fine-tuning, effectively unifying the strengths of the supervised and unsupervised paradigms. Evaluated across 16 zero-shot datasets, QT-AFT achieves state-of-the-art performance, significantly improving both adversarial robustness and clean accuracy. Our results establish textual description quality as a critical, previously underexplored dimension for advancing visual model robustness.
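As a concrete illustration, the minimal sketch below shows one plausible form of caption-guided adversarial example generation, using OpenAI's `clip` package: a PGD attack that pushes an image's embedding away from the embedding of its high-quality caption rather than a class-label prompt. The cosine objective, step size, budget, and pixel-space simplifications are illustrative assumptions, not the paper's exact specification.

```python
# Hedged sketch: caption-guided PGD in the spirit of QT-AFT.
# Assumes OpenAI's `clip` package; eps/alpha/steps and the cosine
# objective are illustrative choices, not the paper's specification.
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def caption_guided_pgd(images, captions, eps=4/255, alpha=1/255, steps=10):
    """Perturb `images` to maximize dissimilarity with their captions."""
    tokens = clip.tokenize(captions).to(device)
    text_feat = F.normalize(model.encode_text(tokens).float(), dim=-1).detach()
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        img_feat = F.normalize(
            model.encode_image(images + delta).float(), dim=-1)
        # Attack objective: drive the image embedding away from the
        # semantically rich caption embedding (not a coarse class label).
        loss = -(img_feat * text_feat).sum(dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient ascent on loss
            delta.clamp_(-eps, eps)             # stay in the L_inf ball
        delta.grad.zero_()
    return (images + delta).detach()
```

Because the attack target is a full caption embedding rather than a single class-name prompt, the perturbation must defeat many semantic cues at once, which is the intuition behind the claimed reduction in category overfitting.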

πŸ“ Abstract
Defending pre-trained vision-language models (VLMs), such as CLIP, against adversarial attacks is crucial, as these models are widely used in diverse zero-shot tasks, including image classification. However, existing adversarial training (AT) methods for robust fine-tuning largely overlook the role of language in enhancing visual robustness. Specifically, (1) supervised AT methods rely on short texts (e.g., class labels) to generate adversarial perturbations, leading to overfitting to object classes in the training data, and (2) unsupervised AT avoids this overfitting but remains suboptimal against practical text-guided adversarial attacks due to its lack of semantic guidance. To address these limitations, we propose Quality Text-guided Adversarial Fine-Tuning (QT-AFT), which leverages high-quality captions during training to guide adversarial examples away from diverse semantics present in images. This enables the visual encoder to robustly recognize a broader range of image features even under adversarial noise, thereby enhancing robustness across diverse downstream tasks. QT-AFT overcomes the key weaknesses of prior methods -- overfitting in supervised AT and lack of semantic awareness in unsupervised AT -- achieving state-of-the-art zero-shot adversarial robustness and clean accuracy, evaluated across 16 zero-shot datasets. Furthermore, our comprehensive study uncovers several key insights into the role of language in enhancing vision robustness; for example, describing object properties in addition to object names further enhances zero-shot robustness. Our findings point to an urgent direction for future work -- centering high-quality linguistic supervision in robust visual representation learning.
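Continuing the sketch above (and assuming the same `model`, `clip`, `device`, and the hypothetical `caption_guided_pgd` helper), one QT-AFT-style training step might look as follows: adversarial images are generated against their captions, then the visual encoder is updated to pull those images back toward the caption embeddings. The frozen text encoder, the optimizer choice, and the alignment loss are assumptions for illustration.

```python
# Hedged sketch of one QT-AFT-style training step, reusing
# `model`, `clip`, `device`, and `caption_guided_pgd` from above.
import torch
import torch.nn.functional as F

# Fine-tune only the visual encoder, as the method targets visual robustness.
optimizer = torch.optim.AdamW(model.visual.parameters(), lr=1e-5)

def training_step(images, captions):
    adv_images = caption_guided_pgd(images, captions)
    tokens = clip.tokenize(captions).to(device)
    with torch.no_grad():  # keep the text encoder frozen
        text_feat = F.normalize(model.encode_text(tokens).float(), dim=-1)
    img_feat = F.normalize(model.encode_image(adv_images).float(), dim=-1)
    # Training objective: re-align adversarial image embeddings with the
    # rich caption semantics so the visual encoder learns to recognize a
    # broad range of image features even under adversarial noise.
    loss = -(img_feat * text_feat).sum(dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```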
Problem

Research questions and friction points this paper is trying to address.

Enhancing visual robustness in VLMs using language guidance
Overcoming overfitting in supervised adversarial training methods
Addressing lack of semantic awareness in unsupervised adversarial training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages high-quality captions for training
Guides adversarial examples with diverse semantics
Enhances robustness across diverse downstream tasks