🤖 AI Summary
Existing adversarial detection methods exhibit poor generalization against both generative and physical adversarial attacks, failing to reliably detect unseen attack types. To address this, we propose the first universal detection framework grounded in the “open-cover structure” of adversarial noise distributions. We first establish a theoretical foundation showing that adversarial noise exhibits a geometric structure in feature space amenable to open-cover characterization. Building on this insight, we design a perturbation forgery mechanism comprising noise modeling, sparse mask guidance, and pseudo-adversarial sample synthesis—enabling robust detection across unknown gradient-based, generative, and physical attacks. Evaluated across multiple benchmarks and diverse attack families, our method achieves significant gains in detection accuracy while reducing inference overhead by 42%. It consistently outperforms state-of-the-art approaches in generalization capability, robustness, and efficiency.
📝 Abstract
As a defense strategy against adversarial attacks, adversarial detection aims to identify and filter adversarial data out of the data flow based on discrepancies in distribution and noise patterns between natural and adversarial data. Although previous detection methods achieve high performance in detecting gradient-based adversarial attacks, newer attacks based on generative models, whose noise patterns are imbalanced and anisotropic, evade detection. Worse still, the significant inference-time overhead and limited performance against unseen attacks make existing techniques impractical for real-world use. In this paper, we explore the proximity relationships among adversarial noise distributions and demonstrate the existence of an open covering of these distributions. By training on this open covering, we can develop a detector that generalizes strongly to various types of unseen attacks. Based on this insight, we heuristically propose Perturbation Forgery, which combines noise distribution perturbation, sparse mask generation, and pseudo-adversarial data production to train a detector capable of identifying unseen gradient-based, generative, and physical adversarial attacks. Comprehensive experiments on multiple general and facial datasets, covering a wide spectrum of attacks, validate our method's strong generalization.
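The three stages named in the abstract can be pictured with a minimal sketch. This is not the paper's implementation: the Gaussian noise model, the `perturb_scale`, `sparsity`, and `eps` parameters, and the uniform random mask are all illustrative assumptions standing in for the method's actual noise modeling and mask guidance.

```python
import numpy as np

rng = np.random.default_rng(0)

def forge_pseudo_adversarial(clean, noise_mean, noise_std,
                             perturb_scale=0.1, sparsity=0.3, eps=8 / 255):
    """Illustrative sketch of Perturbation Forgery's three stages.

    Assumes a simple Gaussian noise model; the real method's noise
    modeling and mask generation are more sophisticated.
    """
    # 1. Noise distribution perturbation: jitter the fitted noise
    #    statistics so samples cover a neighborhood of the distribution.
    mean = noise_mean + rng.normal(0.0, perturb_scale, size=clean.shape)
    std = noise_std * (1.0 + rng.uniform(-perturb_scale, perturb_scale))

    # 2. Sparse mask generation: keep a random pixel subset, mimicking
    #    the imbalanced, anisotropic patterns of generative attacks.
    noise = rng.normal(mean, std)
    mask = (rng.random(clean.shape) < sparsity).astype(clean.dtype)

    # 3. Pseudo-adversarial data production: add bounded masked noise
    #    to the clean image and clip to a valid pixel range.
    forged = clean + np.clip(noise * mask, -eps, eps)
    return np.clip(forged, 0.0, 1.0)

clean = rng.random((3, 32, 32)).astype(np.float32)
pseudo = forge_pseudo_adversarial(clean, noise_mean=0.0, noise_std=0.02)
```

The forged samples serve as positive training data for the detector, so no real attack ever needs to be run during training.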