Revisiting CroPA: A Reproducibility Study and Enhancements for Cross-Prompt Adversarial Transferability in Vision-Language Models

📅 2025-06-28
🤖 AI Summary
This work addresses the limited adversarial transferability of attacks on large vision-language models (VLMs) across diverse prompts. To overcome the insufficient cross-prompt transferability of the existing Cross-Prompt Attack (CroPA), we propose three key innovations: (1) a semantics-aware prompt initialization strategy, (2) a universal perturbation learning framework tailored for multi-prompt generalization, and (3) a customized loss function that targets the vision encoder's self-attention weights. We systematically evaluate our approach on prominent VLMs—including Flamingo, BLIP-2, InstructBLIP, and LLaVA—demonstrating substantial improvements in both cross-prompt and cross-image transferability. Under various black-box transfer settings, our method achieves average attack-success-rate gains of 12.7–28.4% over baselines. Moreover, it generalizes strongly to unseen prompts and images, confirming its robustness and scalability.
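The universal-perturbation framework described above can be sketched as a PGD-style loop: one shared perturbation is updated against the loss averaged over every (image, prompt) pair, then projected back into an L-infinity ball. The NumPy version below is a toy illustration, not the paper's implementation — the `prompt_losses` callables stand in for gradients that would really come from backpropagating through a VLM:

```python
import numpy as np

def universal_perturbation(images, prompt_losses, eps=8/255, alpha=1/255, steps=50):
    """Learn one perturbation `delta` shared across all images and prompts.

    `prompt_losses` is a list of callables, each returning the gradient of
    some prompt-conditioned attack loss w.r.t. the perturbed input (a
    stand-in for autograd through the victim model).
    """
    delta = np.zeros_like(images[0])
    for _ in range(steps):
        grad = np.zeros_like(delta)
        for img in images:
            for loss_grad in prompt_losses:
                # accumulate the gradient over every (image, prompt) pair
                grad += loss_grad(img + delta)
        delta += alpha * np.sign(grad)     # signed ascent step on the shared delta
        delta = np.clip(delta, -eps, eps)  # project into the L-inf eps-ball
    return delta
```

Because a single `delta` must raise the loss for every pair simultaneously, it tends to encode prompt- and image-agnostic directions, which is what enables cross-image transfer.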

📝 Abstract
Large Vision-Language Models (VLMs) have revolutionized computer vision, enabling tasks such as image classification, captioning, and visual question answering. However, they remain highly vulnerable to adversarial attacks, particularly in scenarios where both visual and textual modalities can be manipulated. In this study, we conduct a comprehensive reproducibility study of "An Image is Worth 1000 Lies: Adversarial Transferability Across Prompts on Vision-Language Models", validating the Cross-Prompt Attack (CroPA) and confirming its superior cross-prompt transferability compared to existing baselines. Beyond replication, we propose several key improvements: (1) a novel initialization strategy that significantly improves the Attack Success Rate (ASR); (2) an investigation of cross-image transferability by learning universal perturbations; and (3) a novel loss function targeting vision-encoder attention mechanisms to improve generalization. Our evaluation across prominent VLMs -- including Flamingo, BLIP-2, and InstructBLIP, as well as extended experiments on LLaVA -- validates the original results and demonstrates that our improvements consistently boost adversarial effectiveness. Our work reinforces the importance of studying adversarial vulnerabilities in VLMs and provides a more robust framework for generating transferable adversarial examples, with significant implications for understanding the security of VLMs in real-world applications.
Problem

Research questions and friction points this paper is trying to address.

Enhancing adversarial transferability across prompts in VLMs
Improving attack success rate with novel initialization strategy
Targeting vision encoder attention for better adversarial generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel initialization boosts attack success rate
Universal perturbations enable cross-image transferability
Loss function targets vision encoder attention
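The third contribution, a loss targeting vision-encoder attention, can be illustrated with a toy single-head self-attention map. The specific objective below (mean squared distance between clean and adversarial attention weights, which the attack would maximize to disrupt the encoder's attention pattern) is an assumption for illustration, not the paper's exact formulation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_map(q, k):
    # single-head self-attention weights, as in a ViT encoder block
    scale = np.sqrt(q.shape[-1])
    return softmax(q @ k.T / scale)

def attention_divergence_loss(q_clean, k_clean, q_adv, k_adv):
    """Hypothetical attention-targeting loss: mean squared distance between
    the clean and adversarial self-attention maps. Maximizing it pushes the
    perturbed image's attention away from the clean pattern."""
    a_clean = attention_map(q_clean, k_clean)
    a_adv = attention_map(q_adv, k_adv)
    return float(np.mean((a_adv - a_clean) ** 2))
```

Because the attention map is computed inside the shared vision encoder rather than at the task head, perturbations optimized this way are less tied to any single prompt, which is the intuition behind its improved generalization.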
Atharv Mittal
Mehta Family School of Data Science and Artificial Intelligence, Indian Institute of Technology, Roorkee
Agam Pandey
Department of Civil Engineering, Indian Institute of Technology, Roorkee
Amritanshu Tiwari
Mehta Family School of Data Science and Artificial Intelligence, Indian Institute of Technology, Roorkee
Sukrit Jindal
Mehta Family School of Data Science and Artificial Intelligence, Indian Institute of Technology, Roorkee
Swadesh Swain
Indian Institute of Technology (IIT) Roorkee
Interpretability · Alignment in AI · Robustness · 3D computer vision · Generative AI · NLP