AI Summary
Pedestrian re-identification (re-ID) models are vulnerable to adversarial attacks; however, existing methods focus solely on cross-model or cross-dataset transferability, neglecting robustness evaluation under cross-test-domain settings, i.e., generating effective perturbations that generalize across models trained on distinct source domains. To address this gap, we propose the Meta-Transferable Generative Attack (MTGA), the first framework unifying black-box adversarial attacks across models, datasets, and test domains. Methodologically, MTGA introduces a Perturbation Random Erasing module and a Normalization Mix strategy that blends multi-domain normalization statistics, integrating meta-learning with generative adversarial training to enhance domain-invariant generalization. Extensive experiments on standard benchmarks demonstrate that MTGA achieves an average 11.3% larger mAP drop than state-of-the-art methods, significantly improving attack transferability. The implementation is publicly available.
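The Perturbation Random Erasing idea, zeroing out a random region of the generated perturbation so the attacker cannot rely on corrupting one model-specific feature region, can be sketched as follows. This is an illustrative sketch only; the function name, patch shape, and `erase_frac` parameter are assumptions, not the paper's exact formulation.

```python
import random

def perturbation_random_erase(delta, erase_frac=0.3, rng=None):
    """Zero out a random rectangular patch of a 2-D perturbation map.

    `delta` is an H x W list of lists. A patch whose sides cover roughly
    `erase_frac` of each dimension is set to zero, forcing the attack to
    spread its effect instead of targeting a single region.
    (Illustrative sketch; the paper's module may differ in detail.)
    """
    rng = rng or random.Random()
    h, w = len(delta), len(delta[0])
    eh = max(1, int(h * erase_frac))
    ew = max(1, int(w * erase_frac))
    top = rng.randrange(0, h - eh + 1)
    left = rng.randrange(0, w - ew + 1)
    out = [row[:] for row in delta]  # copy so the input stays intact
    for i in range(top, top + eh):
        for j in range(left, left + ew):
            out[i][j] = 0.0
    return out
```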
Abstract
Deep learning-based person re-identification (re-id) models are widely employed in surveillance systems and inevitably inherit the vulnerability of deep networks to adversarial attacks. Existing attacks merely consider cross-dataset and cross-model transferability, ignoring the cross-test capability to perturb models trained in different domains. To powerfully examine the robustness of real-world re-id models, the Meta Transferable Generative Attack (MTGA) method is proposed, which adopts meta-learning optimization to promote the generative attacker to produce highly transferable adversarial examples by learning comprehensively simulated transfer-based cross-model&dataset&test black-box meta attack tasks. Specifically, cross-model&dataset black-box attack tasks are first mimicked by selecting different re-id models and datasets for the meta-train and meta-test attack processes. As different models may focus on different feature regions, the Perturbation Random Erasing module is further devised to prevent the attacker from learning to corrupt only model-specific features. To help the attacker acquire cross-test transferability, the Normalization Mix strategy is introduced to imitate diverse feature embedding spaces by mixing the multi-domain statistics of target models. Extensive experiments show the superiority of MTGA: in cross-model&dataset and cross-model&dataset&test attacks, MTGA outperforms the SOTA methods by 20.0% and 11.3% in mean mAP drop rate, respectively. The source codes are available at https://github.com/yuanbianGit/MTGA.
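The Normalization Mix strategy described above, imitating diverse feature embedding spaces by mixing multi-domain normalization statistics, can be sketched as below. This is a minimal sketch assuming per-domain (mean, variance) pairs and random convex mixing weights; the function name, weighting scheme, and epsilon value are assumptions, not the paper's exact design.

```python
import random

def normalization_mix(x, domain_stats, rng=None):
    """Normalize a feature vector `x` with mixed multi-domain statistics.

    `domain_stats` is a list of (mean, var) pairs, one per source domain.
    Random convex weights blend them into a single (mean, var) pair,
    imitating the embedding statistics of an unseen test domain.
    (Illustrative sketch; the weighting scheme is an assumption.)
    """
    rng = rng or random.Random()
    eps = 1e-5
    raw = [rng.random() for _ in domain_stats]
    total = sum(raw)
    weights = [r / total for r in raw]  # convex combination weights
    mean = sum(w * m for w, (m, _) in zip(weights, domain_stats))
    var = sum(w * v for w, (_, v) in zip(weights, domain_stats))
    return [(xi - mean) / (var + eps) ** 0.5 for xi in x]
```

With identical statistics in every domain the mix reduces to ordinary normalization, which makes the behavior easy to check.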