Beyond Vulnerabilities: A Survey of Adversarial Attacks as Both Threats and Defenses in Computer Vision Systems

📅 2025-08-03
🤖 AI Summary
Adversarial attacks pose a dual challenge in computer vision: they undermine model robustness and security, yet they also offer potential for strengthening defense mechanisms. Method: This work establishes a unified technical taxonomy spanning three adversarial paradigms: pixel-space attacks (e.g., FGSM, PGD, and momentum-based optimization), physically realizable attacks (e.g., adversarial patches, 3D-texture perturbations, and optical distortions), and latent-space attacks (e.g., semantic-aware perturbations, adaptive step-size schemes, and transferability enhancement). It systematically analyzes their evolutionary trajectories and intrinsic limitations. Contribution/Results: The paper introduces a novel "co-design of attack and defense" classification framework and is the first to explicitly identify open challenges in this area, including adversarial robustness for neural style transfer and computational efficiency optimization. By unifying theoretical analysis with practical attack-defense insights, the study provides foundational principles and actionable technical guidance for developing trustworthy, robust vision systems.
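The pixel-space paradigm above starts from single-step gradient sign attacks such as FGSM. As a minimal sketch (not the survey's implementation), the following uses a toy logistic-regression "model" with an analytic input gradient; the weight vector `w` and the example input are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """One-step FGSM against a toy logistic-regression model with weights w.

    Loss is the logistic loss -log sigmoid(y * w.x) for label y in {-1, +1};
    its gradient with respect to the input x is -y * sigmoid(-y * w.x) * w.
    FGSM moves each pixel by eps in the direction of the gradient's sign.
    """
    grad = -y * sigmoid(-y * np.dot(w, x)) * w
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0, 0.5])   # illustrative linear classifier weights
x = np.array([0.2, 0.1, -0.3])   # clean input, assumed true label y = +1
x_adv = fgsm(x, y=1.0, w=w, eps=0.1)
```

With a deep network the gradient would come from automatic differentiation rather than a closed form, but the sign-and-step structure is the same.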

📝 Abstract
Adversarial attacks against computer vision systems have emerged as a critical research area that challenges the fundamental assumptions about neural network robustness and security. This comprehensive survey examines the evolving landscape of adversarial techniques, revealing their dual nature as both sophisticated security threats and valuable defensive tools. We provide a systematic analysis of adversarial attack methodologies across three primary domains: pixel-space attacks, physically realizable attacks, and latent-space attacks. Our investigation traces the technical evolution from early gradient-based methods such as FGSM and PGD to sophisticated optimization techniques incorporating momentum, adaptive step sizes, and advanced transferability mechanisms. We examine how physically realizable attacks have successfully bridged the gap between digital vulnerabilities and real-world threats through adversarial patches, 3D textures, and dynamic optical perturbations. Additionally, we explore the emergence of latent-space attacks that leverage semantic structure in internal representations to create more transferable and meaningful adversarial examples. Beyond traditional offensive applications, we investigate the constructive use of adversarial techniques for vulnerability assessment in biometric authentication systems and protection against malicious generative models. Our analysis reveals critical research gaps, particularly in neural style transfer protection and computational efficiency requirements. This survey contributes a comprehensive taxonomy, evolution analysis, and identification of future research directions, aiming to advance understanding of adversarial vulnerabilities and inform the development of more robust and trustworthy computer vision systems.
Problem

Research questions and friction points this paper is trying to address.

Examining adversarial attacks as threats and defenses in computer vision systems
Analyzing attack methods in pixel-space, physical, and latent-space domains
Investigating adversarial techniques for vulnerability assessment and protection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzing pixel-space, physically realizable, and latent-space attack methods
Exploring adversarial techniques for defensive applications
Developing taxonomy and future research directions