Distillation-Enhanced Physical Adversarial Attacks

📅 2025-01-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Physical-world adversarial patches face a fundamental trade-off between visual stealth and attack effectiveness. To address this trade-off, the paper proposes a knowledge distillation–based method for generating physically realizable adversarial patches, introducing knowledge distillation into physical adversarial attacks for the first time via a teacher–student co-optimization framework. The method integrates environment-aware adaptive color-space modeling for stealth, constraint-free spatial adversarial optimization, an adversarial knowledge distillation module, and end-to-end differentiable physical rendering, yielding high visual imperceptibility and strong robustness to real-world lighting and viewpoint variations. Experiments show that the approach improves attack success rate by 20% while remaining visually inconspicuous, substantially enhancing the practicality and efficacy of adversarial attacks against AI recognition systems in realistic deployment scenarios.

📝 Abstract
The study of physical adversarial patches is crucial for identifying vulnerabilities in AI-based recognition systems and developing more robust deep learning models. While recent research has focused on improving patch stealthiness for greater practical applicability, achieving an effective balance between stealth and attack performance remains a significant challenge. To address this issue, we propose a novel physical adversarial attack method that leverages knowledge distillation. Specifically, we first define a stealthy color space tailored to the target environment to ensure smooth blending. Then, we optimize an adversarial patch in an unconstrained color space, which serves as the 'teacher' patch. Finally, we use an adversarial knowledge distillation module to transfer the teacher patch's knowledge to the 'student' patch, guiding the optimization of the stealthy patch. Experimental results show that our approach improves attack performance by 20%, while maintaining stealth, highlighting its practical value.
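The teacher–student pipeline in the abstract (unconstrained "teacher" patch, palette-constrained "student" patch, distillation loss) can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the toy frozen classifier, the `palette` of environment colors, `apply_patch`, the loss weighting, and all hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for a frozen recognition model (an assumption, not the paper's target model).
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 10),
)
for p in model.parameters():
    p.requires_grad_(False)

def apply_patch(image, patch):
    """Paste the patch into the top-left corner of the image (simplified placement)."""
    out = image.clone()
    out[:, :, :patch.shape[-2], :patch.shape[-1]] = patch
    return out

image = torch.rand(1, 3, 32, 32)
true_label = torch.tensor([3])

# Teacher: optimized in an unconstrained RGB color space.
teacher = torch.rand(1, 3, 8, 8, requires_grad=True)

# Student: per-pixel convex combination over a small "stealthy" palette
# sampled from the target environment (three illustrative green tones).
palette = torch.tensor([[0.20, 0.30, 0.10],
                        [0.30, 0.40, 0.20],
                        [0.25, 0.35, 0.15]])
coef_logits = torch.zeros(1, 3, 8, 8, requires_grad=True)

def student_patch():
    w = torch.softmax(coef_logits, dim=1)             # convex weights per pixel
    return torch.einsum('bkhw,kc->bchw', w, palette)  # stays inside the palette hull

opt_t = torch.optim.Adam([teacher], lr=0.1)
opt_s = torch.optim.Adam([coef_logits], lr=0.1)

# Stage 1: optimize the teacher patch (untargeted attack: maximize loss on the true label).
for _ in range(50):
    opt_t.zero_grad()
    logits = model(apply_patch(image, teacher.clamp(0, 1)))
    (-F.cross_entropy(logits, true_label)).backward()
    opt_t.step()

# Stage 2: distill the teacher's adversarial behavior into the stealthy student patch.
for _ in range(50):
    opt_s.zero_grad()
    s_logits = model(apply_patch(image, student_patch()))
    with torch.no_grad():
        t_logits = model(apply_patch(image, teacher.clamp(0, 1)))
    adv_loss = -F.cross_entropy(s_logits, true_label)  # keep the student adversarial
    kd_loss = F.mse_loss(s_logits, t_logits)           # match the teacher's outputs
    (adv_loss + kd_loss).backward()
    opt_s.step()
```

By construction the student patch never leaves the convex hull of the environment palette, which is one simple way to encode the "stealthy color space" constraint while the distillation term transfers the teacher's attack strength.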
Problem


Adversarial Patches
Physical World
AI Recognition Systems
Innovation


Knowledge Distillation
Adversarial Patches
Stealthy Attacks
Wei Liu
Tsinghua University, Beijing, China
Yonglin Wu
Tsinghua University, Beijing, China
Chaoqun Li
Algorithm engineer, Qiyuan Lab
Zhuodong Liu
Qiyuan Lab
Huanqian Yan
Beihang University, Beijing, China