Breaking the Illusion of Security via Interpretation: Interpretable Vision Transformer Systems under Attack

📅 2025-07-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study reveals a critical co-vulnerability between Vision Transformers (ViTs) and their interpretability modules in safety-critical applications (e.g., medical diagnosis, autonomous driving). To address this, we propose AdViT—a novel joint adversarial attack framework targeting both ViT classification and its explanation module. AdViT employs gradient-based optimization coupled with explanation consistency constraints to simultaneously manipulate classification decisions and attribution maps under both white-box and black-box settings. Experimental results demonstrate 100% attack success across diverse ViT architectures; misclassification confidence reaches 98% (white-box) and 76% (black-box). Generated adversarial examples achieve both high-confidence mispredictions and high-fidelity explanations, significantly enhancing stealthiness. These findings fundamentally challenge the widely held assumption that “interpretability implies robustness.”
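The joint objective described above (manipulate the classification while keeping the attribution map close to the clean one) can be sketched as a gradient attack on a toy differentiable model. Everything below is an illustrative assumption, not AdViT's actual implementation: the logistic "classifier", the gradient-times-input "interpreter", the numerical gradients, and all hyperparameters are stand-ins chosen to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a classifier: logistic regression on a flat 16-dim "image".
W = rng.normal(size=16)

def predict(x):
    """P(class = 1) for input x."""
    return 1.0 / (1.0 + np.exp(-x @ W))

def saliency(x):
    """Toy gradient*input attribution map, standing in for the interpreter."""
    p = predict(x)
    return p * (1.0 - p) * W * x

def joint_attack(x, steps=200, lr=0.05, lam=1.0):
    """Gradient-based attack on a joint objective: push the class-1 logit
    down (forcing misclassification) while penalizing drift of the saliency
    map away from the clean one (the explanation consistency constraint)."""
    s0 = saliency(x)  # clean attribution map to stay close to

    def loss(z):
        return z @ W + lam * np.sum((saliency(z) - s0) ** 2)

    adv = x.copy()
    eps = 1e-4
    for _ in range(steps):
        # Central-difference numerical gradient, to keep the sketch
        # dependency-free; a real attack would use autodiff.
        grad = np.empty_like(adv)
        for i in range(adv.size):
            d = np.zeros_like(adv)
            d[i] = eps
            grad[i] = (loss(adv + d) - loss(adv - d)) / (2.0 * eps)
        adv = adv - lr * grad
    return adv

x_clean = 0.3 * W              # a clean input the model scores as class 1
x_adv = joint_attack(x_clean)  # lower class-1 score, similar saliency map
```

A practical version would additionally bound the perturbation (e.g. an L-infinity ball, as in PGD) and backpropagate through the real interpreter; the weighting `lam` trades off attack strength against explanation fidelity.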

📝 Abstract
Vision transformer (ViT) models, when coupled with interpretation models, are regarded as secure and challenging to deceive, making them well-suited for security-critical domains such as medical applications, autonomous vehicles, drones, and robotics. However, successful attacks on these systems can lead to severe consequences. Recent research on threats targeting ViT models primarily focuses on generating the smallest adversarial perturbations that can deceive the models with high confidence, without considering their impact on model interpretations. Nevertheless, the use of interpretation models can effectively assist in detecting adversarial examples. This study investigates the vulnerability of transformer models to adversarial attacks, even when combined with interpretation models. We propose an attack called "AdViT" that generates adversarial examples capable of misleading both a given transformer model and its coupled interpretation model. Through extensive experiments on various transformer models and two transformer-based interpreters, we demonstrate that AdViT achieves a 100% attack success rate in both white-box and black-box scenarios. In white-box scenarios, it reaches up to 98% misclassification confidence, while in black-box scenarios, it reaches up to 76% misclassification confidence. Remarkably, AdViT consistently generates accurate interpretations in both scenarios, making the adversarial examples more difficult to detect.
Problem

Research questions and friction points this paper is trying to address.

Study the vulnerability of ViT models coupled with interpreters under adversarial attack
Propose AdViT, an attack that deceives both the model and its interpretation system
Achieve a high attack success rate in both white-box and black-box scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

AdViT misleads both the ViT classifier and its coupled interpretation model
Achieves a 100% attack success rate in both white-box and black-box settings
Generates accurate-looking interpretations so adversarial examples evade detection