CAPAA: Classifier-Agnostic Projector-Based Adversarial Attack

📅 2025-06-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing projection-based adversarial attacks are constrained to single classifiers and fixed camera poses, exhibiting poor generalizability across diverse models and viewpoints. To address this, we propose the first universal projection attack framework jointly optimizing for multi-classifier compatibility and multi-view robustness. Our method introduces a classifier-agnostic adversarial loss, an attention-driven gradient weighting mechanism guided by class activation maps, and a multi-model gradient aggregation strategy. By optimizing projected light patterns, the framework enables black-box attacks that transfer across heterogeneous architectures (e.g., CNNs and Vision Transformers) and arbitrary camera poses. Extensive experiments demonstrate significant improvements over state-of-the-art projection attacks: +12.7% average attack success rate and enhanced visual imperceptibility under diverse viewing conditions and model ensembles.

📝 Abstract
Projector-based adversarial attack aims to project carefully designed light patterns (i.e., adversarial projections) onto scenes to deceive deep image classifiers. It has potential applications in privacy protection and the development of more robust classifiers. However, existing approaches primarily focus on individual classifiers and fixed camera poses, often neglecting the complexities of multi-classifier systems and scenarios with varying camera poses. This limitation reduces their effectiveness when introducing new classifiers or camera poses. In this paper, we introduce Classifier-Agnostic Projector-Based Adversarial Attack (CAPAA) to address these issues. First, we develop a novel classifier-agnostic adversarial loss and optimization framework that aggregates adversarial and stealthiness loss gradients from multiple classifiers. Then, we propose an attention-based gradient weighting mechanism that concentrates perturbations on regions of high classification activation, thereby improving the robustness of adversarial projections when applied to scenes with varying camera poses. Our extensive experimental evaluations demonstrate that CAPAA achieves both a higher attack success rate and greater stealthiness compared to existing baselines. Codes are available at: https://github.com/ZhanLiQxQ/CAPAA.
Problem

Research questions and friction points this paper is trying to address.

Addresses limitations in projector-based adversarial attacks for multi-classifier systems
Improves robustness of adversarial projections with varying camera poses
Enhances attack success rate and stealthiness across diverse classifiers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Classifier-agnostic adversarial loss with multi-classifier gradient aggregation
Attention-based gradient weighting mechanism
Robust adversarial projections for varying poses
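The attention-based gradient weighting listed above can be sketched as a per-pixel mask built from class activation maps (CAMs): averaging the maps from multiple classifiers and rescaling to [0, 1] concentrates the perturbation on regions all models attend to. The function name and the min-max normalization are illustrative assumptions; the paper's exact weighting scheme may differ.

```python
import numpy as np

def cam_weighted_grad(grad, cams):
    """Scale a perturbation gradient by averaged class activation maps.

    `cams` is a list of per-classifier activation maps with the same
    spatial shape as `grad`. The averaged map is min-max normalized to
    [0, 1] and applied as an element-wise mask, so low-activation pixels
    receive little perturbation (helping stealthiness) while
    high-activation pixels carry most of the attack.
    """
    mean_cam = np.mean(cams, axis=0)
    mask = (mean_cam - mean_cam.min()) / (mean_cam.max() - mean_cam.min() + 1e-12)
    return grad * mask

# Toy example: uniform gradient, one synthetic activation map.
grad = np.ones((2, 2))
cam = np.array([[0.0, 1.0], [2.0, 3.0]])
print(cam_weighted_grad(grad, [cam]))
```

Weighting the gradient rather than the final pattern keeps the optimization loop unchanged: the mask simply reshapes each update step toward classification-relevant regions.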