A Survey of Adversarial Defenses in Vision-based Systems: Categorization, Methods and Challenges

📅 2025-03-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Adversarial attacks pose a critical threat to the deployment of trustworthy AI in vision systems. Method: This paper proposes the first three-dimensional defense taxonomy framework integrating deployment stage, model lifecycle, and attack modality. Through a systematic literature review and cross-method comparative analysis, it constructs a structured knowledge graph covering image classification and object detection tasks. Contribution/Results: The framework precisely delineates applicability boundaries and limitations of 32 mainstream defense mechanisms. It establishes, for the first time, explicit mappings among defense strategies, attack types (white-box vs. black-box; digital vs. physical), and benchmark datasets, accompanied by an effectiveness reference graph. The work delivers a reproducible evaluation benchmark and practical design guidelines for developing robust, trustworthy AI vision systems.

📝 Abstract
Adversarial attacks have emerged as a major challenge to the trustworthy deployment of machine learning models, particularly in computer vision applications. These attacks vary in potency and can be mounted in both white-box and black-box settings. Practical attacks include methods that manipulate the physical world to induce adversarial behaviour in the targeted neural network models. The literature offers many approaches to mitigating such attacks, each with its own advantages and limitations. In this survey, we present a comprehensive systematization of knowledge on adversarial defenses, focusing on two key computer vision tasks: image classification and object detection. We review state-of-the-art adversarial defense techniques and categorize them for easier comparison. In addition, we provide a schematic representation of these categories within the overall machine learning pipeline, facilitating clearer understanding and benchmarking of defenses. Furthermore, we map these defenses to the adversarial attack types and datasets on which they are most effective, offering practical insights for researchers and practitioners. Understanding both the extent to which available defenses address adversarial threats and where they fall short is essential for steering research in this area toward trustworthy AI systems fit for everyday practical use.
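To make the white-box/black-box distinction in the abstract concrete, here is a minimal sketch (not from the paper) of a one-step gradient-sign attack in the style of FGSM on a toy linear classifier. The model, data, and epsilon value are illustrative assumptions; a white-box attacker can compute the input gradient exactly, whereas a black-box attacker would have to estimate this direction from queries alone.

```python
import numpy as np

# Toy linear "model": logits = W @ x. White-box access means the
# attacker knows W and can differentiate the loss w.r.t. the input.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))        # weights of a 3-class linear classifier
x = rng.normal(size=8)             # clean input
y = 0                              # assumed true label

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Gradient of the cross-entropy loss w.r.t. the input: W^T (p - onehot(y))
p = softmax(W @ x)
onehot = np.eye(3)[y]
grad_x = W.T @ (p - onehot)

eps = 0.5                          # illustrative perturbation budget
x_adv = x + eps * np.sign(grad_x)  # one-step white-box perturbation

print("clean prediction:", np.argmax(W @ x))
print("adversarial prediction:", np.argmax(W @ x_adv))
```

Because the cross-entropy loss of this model is convex in the input, the signed-gradient step can only increase the loss on the true label; many of the defenses the survey categorizes (e.g. adversarial training) aim to blunt exactly this kind of perturbation.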
Problem

Research questions and friction points this paper is trying to address.

Address adversarial attacks in vision-based machine learning systems.
Categorize and compare state-of-the-art adversarial defense techniques.
Map defenses to attack types and datasets for practical insights.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematizes adversarial defense knowledge comprehensively
Categorizes state-of-the-art defense techniques effectively
Maps defenses to specific attacks and datasets