Detecting Adversarial Data using Perturbation Forgery

📅 2024-05-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing adversarial detection methods exhibit poor generalization against both generative and physical adversarial attacks, failing to reliably detect unseen attack types. To address this, we propose the first universal detection framework grounded in the “open-cover structure” of adversarial noise distributions. We first establish a theoretical foundation showing that adversarial noise exhibits a geometric structure in feature space amenable to open-cover characterization. Building on this insight, we design a perturbation forgery mechanism comprising noise modeling, sparse mask guidance, and pseudo-adversarial sample synthesis—enabling robust detection across unknown gradient-based, generative, and physical attacks. Evaluated across multiple benchmarks and diverse attack families, our method achieves significant gains in detection accuracy while reducing inference overhead by 42%. It consistently outperforms state-of-the-art approaches in generalization capability, robustness, and efficiency.

Technology Category

Application Category

📝 Abstract
As a defense strategy against adversarial attacks, adversarial detection aims to identify and filter out adversarial data from the data flow based on discrepancies in distribution and noise patterns between natural and adversarial data. Although previous detection methods achieve high performance in detecting gradient-based adversarial attacks, new attacks based on generative models with imbalanced and anisotropic noise patterns evade detection. Even worse, the significant inference time overhead and limited performance against unseen attacks make existing techniques impractical for real-world use. In this paper, we explore the proximity relationship among adversarial noise distributions and demonstrate the existence of an open covering for these distributions. By training on the open covering of adversarial noise distributions, a detector with strong generalization performance against various types of unseen attacks can be developed. Based on this insight, we heuristically propose Perturbation Forgery, which includes noise distribution perturbation, sparse mask generation, and pseudo-adversarial data production, to train an adversarial detector capable of detecting any unseen gradient-based, generative-based, and physical adversarial attacks. Comprehensive experiments conducted on multiple general and facial datasets, with a wide spectrum of attacks, validate the strong generalization of our method.
Problem

Research questions and friction points this paper is trying to address.

Detecting adversarial data with strong generalization against unseen attacks
Overcoming limitations of existing methods in detecting generative-based adversarial attacks
Reducing inference time overhead for practical real-world adversarial detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open covering of adversarial noise distributions
Perturbation Forgery with noise distribution perturbation
Sparse mask generation for pseudo-adversarial data
🔎 Similar Papers
No similar papers found.
Q
Qian Wang
Huazhong University of Science and Technology, China
C
Chen Li
Wuhan University, China
Y
Yuchen Luo
Wuhan University, China
H
Hefei Ling
Huazhong University of Science and Technology, China
S
Shijuan Huang
Huazhong University of Science and Technology, China
Ruoxi Jia
Ruoxi Jia
Assistant Professor, Virginia Tech
Machine LearningPrivacySecurityData Economy
N
Ning Yu
Salesforce Research, USA