Activation Steering Meets Preference Optimization: Defense Against Jailbreaks in Vision Language Models

📅 2025-08-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) are vulnerable to jailbreaking and other adversarial attacks; existing defenses rely on task-specific contrastive prompts, exhibiting poor robustness and degrading visual alignment performance. To address this, we propose SPO-VLM—the first dual-stage defense framework integrating activation-layer intervention with strategy-layer optimization. In the first stage, harmful directions are extracted from multi-source data, followed by layer-adaptive activation steering. In the second stage, sequence-level preference optimization jointly leverages a vision-consistency reward—grounded in image-text alignment—and an automated toxicity evaluation, enabling synergistic optimization of safety and semantic grounding. Experiments demonstrate that SPO-VLM significantly improves robustness against diverse jailbreaking attacks while maintaining state-of-the-art performance on standard VLM benchmarks.
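The Stage-I idea of extracting a harmful direction and steering activations away from it can be sketched minimally. This is an illustrative difference-in-means version, not the paper's actual implementation: the function names, the unit-norm projection removal, and the `alpha` strength parameter are all assumptions for exposition.

```python
import numpy as np

def steering_vector(harmful_acts, benign_acts):
    """Illustrative harmful direction: difference of mean hidden states
    (difference-in-means), normalized to unit length.

    harmful_acts, benign_acts: (n_samples, hidden_dim) activations
    collected at one layer from harmful vs. benign prompts.
    """
    v = harmful_acts.mean(axis=0) - benign_acts.mean(axis=0)
    return v / np.linalg.norm(v)

def steer(hidden, v, alpha=1.0):
    """Remove the component of a hidden state along the harmful
    direction v at inference time (alpha scales the intervention)."""
    return hidden - alpha * (hidden @ v) * v

# Toy example with 2-D "activations"
harmful = np.array([[2.0, 0.0], [4.0, 0.0]])
benign = np.array([[0.0, 1.0], [0.0, 3.0]])
v = steering_vector(harmful, benign)
h = np.array([5.0, 5.0])
h_steered = steer(h, v)  # projection onto v is now zero
```

In a real VLM this intervention would be applied per layer (e.g. via forward hooks), with the paper's layer-adaptive scheme choosing where and how strongly to steer.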

📝 Abstract
Vision Language Models (VLMs) have demonstrated impressive capabilities in integrating visual and textual information for understanding and reasoning, but remain highly vulnerable to adversarial attacks. While activation steering has emerged as a promising defense, existing approaches often rely on task-specific contrastive prompts to extract harmful directions, which exhibit suboptimal performance and can degrade visual grounding. To address these limitations, we propose Sequence-Level Preference Optimization for VLM (SPO-VLM), a novel two-stage defense framework that combines activation-level intervention with policy-level optimization to enhance model robustness. In Stage I, we compute adaptive layer-specific steering vectors from diverse data sources, enabling generalized suppression of harmful behaviors during inference. In Stage II, we refine these steering vectors through a sequence-level preference optimization process. This stage integrates automated toxicity assessment, as well as visual-consistency rewards based on caption-image alignment, to achieve safe and semantically grounded text generation. The two-stage structure of SPO-VLM balances efficiency and effectiveness by combining a lightweight mitigation foundation in Stage I with deeper policy refinement in Stage II. Extensive experiments show that SPO-VLM enhances safety against attacks via activation steering and preference optimization, while maintaining strong performance on benign tasks without compromising visual understanding capabilities. We will release our code, model weights, and evaluation toolkit to support reproducibility and future research. Warning: This paper may contain examples of offensive or harmful text and images.
Problem

Research questions and friction points this paper is trying to address.

Defending Vision Language Models against adversarial jailbreak attacks
Improving activation steering defense with generalized harmful behavior suppression
Balancing safety and visual grounding through policy optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage defense framework combining activation steering with sequence-level preference optimization
Layer-specific steering vectors from diverse data sources
Sequence-level preference optimization with visual-consistency rewards
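The Stage-II reward described above blends a vision-consistency signal with an automated toxicity score. A minimal sketch of one plausible blend, assuming cosine similarity between image and caption embeddings as the consistency reward and a toxicity score in [0, 1]; the weighting `lam` and all function names are illustrative, not taken from the paper:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def sequence_reward(img_emb, cap_emb, toxicity, lam=0.5):
    """Blend a vision-consistency reward (image-caption embedding
    similarity) with a safety reward (1 - toxicity, toxicity in [0, 1]).
    lam trades off grounding against safety."""
    return lam * cosine(img_emb, cap_emb) + (1.0 - lam) * (1.0 - toxicity)

# Preference pairs for sequence-level optimization: the response with
# the higher blended reward becomes the "chosen" one.
r_safe = sequence_reward(np.ones(4), np.ones(4), toxicity=0.1)
r_toxic = sequence_reward(np.ones(4), np.ones(4), toxicity=0.9)
# r_safe > r_toxic, so the low-toxicity response is preferred
```

Scoring whole generated sequences this way yields chosen/rejected pairs that a preference-optimization objective (e.g. a DPO-style loss) can consume to refine the steering policy.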