🤖 AI Summary
This work addresses the inefficiency and poor spatial localization of existing adversarial attacks on large vision-language models under limited perturbation budgets. To overcome these limitations, we propose the Stage-wise Attention-Guided Attack (SAGA), a framework built on a newly identified positive correlation between regional attention scores and sensitivity to the adversarial loss. Leveraging this insight, SAGA concentrates perturbations on high-sensitivity regions and exploits an attention redistribution effect, in which attacking one salient region shifts attention to the next, to generate perturbations in a structured, progressive manner. By combining attention-map analysis, staged optimization, and pixel-level perturbation constraints, SAGA achieves state-of-the-art attack success rates across ten mainstream vision-language models while producing highly imperceptible adversarial examples.
📝 Abstract
Adversarial attacks against Large Vision-Language Models (LVLMs) are crucial for exposing safety vulnerabilities in modern multimodal systems. Recent attacks based on input transformations, such as random cropping, suggest that spatially localized perturbations can be more effective than global image manipulation. However, randomly cropping the entire image is inherently stochastic and fails to use the limited per-pixel perturbation budget efficiently. We make two key observations: (i) regional attention scores are positively correlated with adversarial loss sensitivity, and (ii) attacking high-attention regions induces a structured redistribution of attention toward subsequent salient regions. Based on these findings, we propose Stage-wise Attention-Guided Attack (SAGA), an attention-guided framework that progressively concentrates perturbations on high-attention regions. SAGA enables more efficient use of constrained perturbation budgets, producing highly imperceptible adversarial examples while consistently achieving state-of-the-art attack success rates across ten LVLMs. The source code is available at https://github.com/jackwaky/SAGA.
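The core idea of concentrating a constrained perturbation budget on high-attention regions can be illustrated with a minimal sketch. Note this is not the authors' implementation: the function name, the top-fraction thresholding rule, and the synthetic image, gradient, and attention map below are all illustrative assumptions standing in for a real LVLM's attention maps and loss gradients.

```python
import numpy as np

def attention_masked_step(image, grad, attn, eps=8/255, alpha=2/255, top_frac=0.25):
    """One attention-guided perturbation step (illustrative sketch, not SAGA itself).

    Only pixels whose attention score falls in the top `top_frac` fraction
    receive a signed-gradient update; the result is projected back into the
    per-pixel L_inf budget `eps` and the valid pixel range [0, 1].
    """
    # Binary mask selecting the highest-attention pixels.
    thresh = np.quantile(attn, 1.0 - top_frac)
    mask = (attn >= thresh).astype(image.dtype)
    # Signed-gradient step restricted to the masked region.
    delta = alpha * np.sign(grad) * mask
    # Project into the L_inf ball of radius eps, then into valid pixel range.
    perturbed = np.clip(image + delta, image - eps, image + eps)
    return np.clip(perturbed, 0.0, 1.0), mask

# Toy example with synthetic data standing in for model-derived quantities.
rng = np.random.default_rng(0)
img = rng.random((32, 32)).astype(np.float32)
grad = rng.standard_normal((32, 32)).astype(np.float32)
attn = rng.random((32, 32)).astype(np.float32)
adv, mask = attention_masked_step(img, grad, attn)
```

In a staged scheme, this step would be repeated with a refreshed attention map each stage, so that later stages target the regions attention has shifted to.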