Learning to Learn Transferable Generative Attack for Person Re-Identification

📅 2024-09-06
🏛️ IEEE Transactions on Image Processing
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Person re-identification (re-ID) models are vulnerable to adversarial attacks, yet existing methods consider only cross-model or cross-dataset transferability, neglecting the cross-test setting: generating perturbations that remain effective against models deployed in unseen test domains. To close this gap, the paper proposes the Meta-Transferable Generative Attack (MTGA), the first framework unifying black-box adversarial attacks across models, datasets, and test domains. Methodologically, MTGA combines meta-learning with generative adversarial training, introducing a Perturbation Random Erasing module and a Normalization Mix strategy that mixes multi-domain normalization statistics to promote domain-invariant transferability. Extensive experiments on standard benchmarks show that MTGA outperforms state-of-the-art methods by 20.0% and 11.3% in mean mAP drop rate on cross-model&dataset and cross-model&dataset&test attacks, respectively. The implementation is publicly available.

πŸ“ Abstract
Deep learning-based person re-identification (re-id) models are widely employed in surveillance systems and inevitably inherit the vulnerability of deep networks to adversarial attacks. Existing attacks merely consider cross-dataset and cross-model transferability, ignoring the cross-test capability to perturb models trained in different domains. To powerfully examine the robustness of real-world re-id models, the Meta Transferable Generative Attack (MTGA) method is proposed, which adopts meta-learning optimization to promote the generative attacker to produce highly transferable adversarial examples by learning comprehensively simulated transfer-based cross-model&dataset&test black-box meta attack tasks. Specifically, cross-model&dataset black-box attack tasks are first mimicked by selecting different re-id models and datasets for the meta-train and meta-test attack processes. As different models may focus on different feature regions, the Perturbation Random Erasing module is further devised to prevent the attacker from learning to corrupt only model-specific features. To help the attacker acquire cross-test transferability, the Normalization Mix strategy is introduced to imitate diverse feature embedding spaces by mixing multi-domain statistics of target models. Extensive experiments show the superiority of MTGA: in cross-model&dataset and cross-model&dataset&test attacks, it outperforms SOTA methods by 20.0% and 11.3% in mean mAP drop rate, respectively. The source codes are available at https://github.com/yuanbianGit/MTGA.
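The Perturbation Random Erasing module described in the abstract can be pictured as random erasing applied to the generated perturbation rather than the image. The sketch below is illustrative only, assuming an (H, W, C) NumPy perturbation and hypothetical parameter names; it is not the paper's implementation.

```python
import numpy as np

def perturbation_random_erasing(delta, erase_prob=0.5, area_frac=0.2, rng=None):
    """Zero out a random rectangular region of an adversarial
    perturbation `delta` (an (H, W, C) array), so the attacker
    cannot rely on corrupting any single model-specific region."""
    rng = rng or np.random.default_rng()
    if rng.random() > erase_prob:
        return delta  # leave the perturbation untouched this time
    h, w = delta.shape[:2]
    # side lengths of a square-ish patch covering ~area_frac of the image
    eh, ew = int(h * area_frac ** 0.5), int(w * area_frac ** 0.5)
    top = rng.integers(0, h - eh + 1)
    left = rng.integers(0, w - ew + 1)
    erased = delta.copy()
    erased[top:top + eh, left:left + ew, :] = 0.0  # drop this region's perturbation
    return erased
```

During generator training, the erased perturbation would be added to the input before querying the surrogate model, forcing the corrupted features to spread beyond one region.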
Problem

Research questions and friction points this paper is trying to address.

Enhancing adversarial attack transferability across re-id models
Improving cross-test capability for domain-robust perturbations
Meta-learning optimization for comprehensive black-box attacks
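The meta attack tasks mentioned above pair distinct surrogate models and datasets for the meta-train and meta-test phases. A minimal sketch of such task sampling, with purely illustrative names and structure (not the paper's code):

```python
import random

def sample_meta_attack_task(models, datasets, rng=None):
    """Pick disjoint (model, dataset) pairs for the meta-train and
    meta-test phases, mimicking a cross-model&dataset black-box attack
    in which the meta-test victim differs from the meta-train surrogate."""
    rng = rng or random.Random()
    m_train, m_test = rng.sample(models, 2)    # two different re-id models
    d_train, d_test = rng.sample(datasets, 2)  # two different source datasets
    return {"meta_train": (m_train, d_train), "meta_test": (m_test, d_test)}
```

Sampling without replacement guarantees that the generator is always evaluated on a model/dataset combination it did not just train against.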
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-learning optimizes generative adversarial attack transferability
Perturbation Random Erasing prevents model-specific feature corruption
Normalization Mix mimics diverse feature embedding spaces
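The Normalization Mix idea, mixing normalization statistics from multiple domains to imitate an unseen feature embedding space, can be sketched as a convex interpolation of per-domain (mean, variance) pairs. This is a simplified illustration under assumed names, not the paper's method in full:

```python
import numpy as np

def normalization_mix(feats, stats_a, stats_b, lam=None, rng=None):
    """Normalize `feats` with statistics interpolated between two
    domains' (mean, var) pairs, imitating a test domain whose
    feature distribution lies between the source domains."""
    rng = rng or np.random.default_rng()
    lam = rng.uniform(0.0, 1.0) if lam is None else lam
    mean = lam * stats_a[0] + (1.0 - lam) * stats_b[0]  # mixed running mean
    var = lam * stats_a[1] + (1.0 - lam) * stats_b[1]   # mixed running variance
    return (feats - mean) / np.sqrt(var + 1e-5)
```

In a batch-normalized target network, the same interpolation would be applied to each layer's running statistics, so the attacker sees a continuum of simulated test-domain embeddings rather than only the source domains'.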
Yuan Bian
College of Electrical and Information Engineering at Hunan University and National Engineering Research Center of Robot Visual Perception and Control Technology, Changsha, Hunan, China
Min Liu
College of Electrical and Information Engineering at Hunan University and National Engineering Research Center of Robot Visual Perception and Control Technology, Changsha, Hunan, China
Xueping Wang
Hunan Normal University
Yunfeng Ma
College of Electrical and Information Engineering at Hunan University and National Engineering Research Center of Robot Visual Perception and Control Technology, Changsha, Hunan, China
Yaonan Wang
College of Electrical and Information Engineering at Hunan University and National Engineering Research Center of Robot Visual Perception and Control Technology, Changsha, Hunan, China