Defense That Attacks: How Robust Models Become Better Attackers

📅 2025-12-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work reveals that adversarial training—while enhancing model robustness—unintentionally strengthens models’ capacity to generate transferable adversarial examples, thereby exacerbating ecosystem-level risks in cross-model attacks. To systematically investigate this phenomenon, the authors conduct large-scale transferability experiments across a diverse “model zoo” comprising 36 CNN and Vision Transformer (ViT) architectures, evaluating how adversarial examples crafted from models trained under different strategies migrate across architectural boundaries. Results demonstrate that perturbations generated by adversarially trained models exhibit significantly higher transferability, consistently across multiple attack methods and model families. Crucially, this study is the first to incorporate “capability of generating transferable attacks” as a formal dimension of robustness evaluation, advocating for joint assessment of both a model’s resistance to attacks and its potential as an attack source. All models, code, and experimental scripts are publicly released.

📝 Abstract
Deep learning has achieved great success in computer vision, but remains vulnerable to adversarial attacks. Adversarial training is the leading defense designed to improve model robustness. However, its effect on the transferability of attacks is underexplored. In this work, we ask whether adversarial training unintentionally increases the transferability of adversarial examples. To answer this, we trained a diverse zoo of 36 models, including CNNs and ViTs, and conducted comprehensive transferability experiments. Our results reveal a clear paradox: adversarially trained (AT) models produce perturbations that transfer more effectively than those from standard models, introducing a new ecosystem risk. To enable reproducibility and further study, we release all models, code, and experimental scripts. Furthermore, we argue that robustness evaluations should assess not only the resistance of a model to transferred attacks but also its propensity to produce transferable adversarial examples.
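The protocol in the abstract — craft perturbations against one model, then measure how often every other model misclassifies them — amounts to filling an n×n transfer matrix over the zoo. A minimal sketch, assuming models are callables returning a hard label and `attack` is any crafting routine; all names and the toy 1-D classifiers are hypothetical stand-ins for the paper's 36-model CNN/ViT zoo:

```python
def transfer_matrix(models, attack, inputs, labels):
    """rates[i][j] = fraction of examples crafted against models[i] (the
    surrogate) that models[j] (the target) misclassifies."""
    n = len(models)
    rates = [[0.0] * n for _ in range(n)]
    for i, surrogate in enumerate(models):
        # Craft one adversarial example per input against the surrogate.
        adv = [attack(surrogate, x, y) for x, y in zip(inputs, labels)]
        for j, target in enumerate(models):
            fooled = sum(target(x_adv) != y for x_adv, y in zip(adv, labels))
            rates[i][j] = fooled / len(labels)
    return rates

# Toy demo: two 1-D threshold classifiers and a fixed-step "attack".
models = [lambda x: int(x > 0.0), lambda x: int(x > 0.2)]
attack = lambda model, x, y: x - 0.6 if y == 1 else x + 0.6
rates = transfer_matrix(models, attack, inputs=[0.5, -0.5], labels=[1, 0])
```

Off-diagonal entries of `rates` are the cross-model transfer rates the paper compares between adversarially trained and standard surrogates.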
Problem

Research questions and friction points this paper is trying to address.

Adversarial training's effect on attack transferability is underexplored.
The paper investigates whether adversarially trained models produce more transferable adversarial examples.
A paradox emerges: robust models craft attacks that transfer more effectively than those from standard models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shows that adversarial training enhances the transferability of adversarial examples crafted on the trained model
Proposes a model's propensity to generate transferable attacks as a formal dimension of robustness evaluation
Releases all 36 models, code, and experimental scripts for reproducibility and further study
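The transfer phenomenon can be illustrated in miniature with a one-step FGSM attack crafted on one linear "surrogate" model and evaluated on a similar but distinct "target" — a toy stand-in for the paper's cross-architecture CNN/ViT setting; all weights and names here are hypothetical:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM on a logistic-regression surrogate: the gradient of the
    binary cross-entropy loss w.r.t. the input is (sigmoid(w.x + b) - y) * w,
    and FGSM steps eps in its sign direction."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # surrogate's predicted probability
    grad = (p - y) * w                      # dLoss/dx
    return x + eps * np.sign(grad)

def predict(x, w, b):
    """Hard 0/1 prediction of a linear classifier."""
    return int((w @ x + b) > 0)

# Toy surrogate and target with similar but distinct weights.
w_surrogate = np.array([1.0, -0.8, 0.5, -1.2])
w_target = np.array([0.9, -0.7, 0.6, -1.1])

x = np.array([0.3, 0.2, -0.4, 0.1])
y = predict(x, w_target, 0.0)              # label the target assigns to x

x_adv = fgsm_perturb(x, w_surrogate, 0.0, y, eps=0.5)
transfers = predict(x_adv, w_target, 0.0) != y  # crafted on the surrogate,
                                                # yet it fools the target
```

Because the two weight vectors are correlated, the perturbation crafted on the surrogate also flips the target's prediction — the same mechanism the paper measures at scale, finding it amplified when the surrogate is adversarially trained.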
Mohamed Awad
Cyber-Physical Systems Lab, Egypt Japan University of Science and Technology, Alexandria, Egypt; Department of Statistics and Data Science, MBZUAI, Abu Dhabi, UAE
Mahmoud Akrm
Cyber-Physical Systems Lab, Egypt Japan University of Science and Technology, Alexandria, Egypt
Walid Gomaa
Egypt Japan University of Science and Technology
Artificial Intelligence and Theoretical Computer Science