PB-UAP: Hybrid Universal Adversarial Attack For Image Segmentation

📅 2024-12-21
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing universal adversarial attacks for image segmentation exhibit limited transferability across samples and architectures, hindering robustness evaluation. Method: This paper proposes a novel hybrid universal adversarial attack method that jointly optimizes in both pixel and frequency domains. Specifically, it integrates dual-feature disentanglement in pixel space with low-frequency Fourier scattering in the frequency domain, forming a dual-branch collaborative optimization framework. A feature-decoupling module and multi-model joint optimization strategy are introduced to enhance perturbation generalization. Contribution/Results: The method achieves significant improvements in attack success rates across mainstream segmentation models—including DeepLabv3+ and SegFormer. Crucially, its cross-architecture transferability outperforms state-of-the-art methods by 12.6%, demonstrating superior generalization and practical utility for robustness assessment of segmentation models.

📝 Abstract
With the rapid advancement of deep learning, model robustness has become a significant research hotspot, i.e., adversarial attacks on deep neural networks. Existing works primarily focus on image classification tasks, aiming to alter the model's predicted labels. Due to the complexity of segmentation outputs and the depth of segmentation architectures, research on adversarial examples for segmentation models remains limited, particularly for universal adversarial perturbations. In this paper, we propose a novel universal adversarial attack method designed for segmentation models, which includes dual feature separation and low-frequency scattering modules. The two modules guide the training of adversarial examples in the pixel and frequency spaces, respectively. Experiments demonstrate that our method achieves high attack success rates surpassing state-of-the-art methods, and exhibits strong transferability across different models.
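The abstract's low-frequency scattering module constrains the perturbation in the frequency domain. A minimal sketch of that general idea, assuming a simple circular low-pass mask applied via a 2D FFT (the function names, `radius` hyperparameter, and L-infinity budget below are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def low_frequency_project(delta, radius=8):
    """Keep only low-frequency components of a perturbation.

    Illustrates the idea behind a low-frequency constraint: transform the
    perturbation with a 2D FFT, zero out coefficients farther than `radius`
    from the spectrum centre, then transform back. `radius` is a
    hypothetical hyperparameter, not taken from the paper.
    """
    f = np.fft.fftshift(np.fft.fft2(delta, axes=(0, 1)), axes=(0, 1))
    h, w = delta.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    mask = ((yy - h // 2) ** 2 + (xx - w // 2) ** 2) <= radius ** 2
    f = f * mask[..., None] if delta.ndim == 3 else f * mask
    out = np.fft.ifft2(np.fft.ifftshift(f, axes=(0, 1)), axes=(0, 1))
    return np.real(out)

def clip_linf(delta, eps=8 / 255):
    """Project onto the L-infinity ball of radius eps, the standard
    imperceptibility constraint for universal perturbations."""
    return np.clip(delta, -eps, eps)
```

In a universal-perturbation training loop, projections like these would typically be applied after each gradient step so the perturbation stays low-frequency and within the pixel budget.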
Problem

Research questions and friction points this paper is trying to address.

Adversarial Attacks
Image Segmentation
Model Robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

PB-UAP
Adversarial Attack
Image Segmentation
Yufei Song
School of Cyber Science and Engineering, Huazhong University of Science and Technology
Ziqi Zhou
School of Computer Science and Technology, Huazhong University of Science and Technology
Minghui Li
Huazhong University of Science and Technology
AI Security
Xianlong Wang
Ph.D. student, City University of Hong Kong
Trustworthy LLM/VLM · Embodied AI · Unlearnable Example · 3D Point Cloud · Poisoning/Adversarial Attack
Menghao Deng
School of Cyber Science and Engineering, Huazhong University of Science and Technology
Wei Wan
School of Cyber Science and Engineering, Huazhong University of Science and Technology
Shengshan Hu
School of CSE, Huazhong University of Science and Technology (HUST)
AI Security · Embodied AI · Autonomous Driving
Leo Yu Zhang
School of Information and Communication Technology, Griffith University