🤖 AI Summary
This work addresses the limited transferability of black-box adversarial attacks by proposing a generative attack method leveraging self-supervised Vision Transformer (ViT) features. Methodologically, it is the first to jointly integrate contrastive learning and masked image modeling—two complementary self-supervised paradigms—to collaboratively extract both global structural and local textural features from ViT representations. An attention-guided generative adversarial network is further designed to optimize perturbations in feature space for enhanced generalization. Key contributions include: (1) a self-supervision-driven feature disentanglement mechanism, and (2) a ViT attention-guided perturbation generation strategy. Extensive experiments across diverse target models demonstrate significant improvements in black-box transfer success rates, consistently outperforming state-of-the-art methods by 4.2%–9.7% on average. These results validate the efficacy of self-supervised representations for modeling adversarial robustness.
📝 Abstract
The ability of deep neural networks (DNNs) come from extracting and interpreting features from the data provided. By exploiting intermediate features in DNNs instead of relying on hard labels, we craft adversarial perturbation that generalize more effectively, boosting black-box transferability. These features ubiquitously come from supervised learning in previous work. Inspired by the exceptional synergy between self-supervised learning and the Transformer architecture, this paper explores whether exploiting self-supervised Vision Transformer (ViT) representations can improve adversarial transferability. We present dSVA -- a generative dual self-supervised ViT features attack, that exploits both global structural features from contrastive learning (CL) and local textural features from masked image modeling (MIM), the self-supervised learning paradigm duo for ViTs. We design a novel generative training framework that incorporates a generator to create black-box adversarial examples, and strategies to train the generator by exploiting joint features and the attention mechanism of self-supervised ViTs. Our findings show that CL and MIM enable ViTs to attend to distinct feature tendencies, which, when exploited in tandem, boast great adversarial generalizability. By disrupting dual deep features distilled by self-supervised ViTs, we are rewarded with remarkable black-box transferability to models of various architectures that outperform state-of-the-arts. Code available at https://github.com/spencerwooo/dSVA.