Universal Adversarial Suffixes for Language Models Using Reinforcement Learning with Calibrated Reward

📅 2025-12-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Language models are vulnerable to short adversarial suffixes, yet existing gradient- or rule-based methods generalize poorly and transfer weakly across tasks and models. To address this, the paper proposes the first reinforcement learning–based framework for generating universal adversarial suffixes. The suffix is modeled as a policy trained with proximal policy optimization (PPO); a calibrated cross-entropy reward mitigates label bias, while multi-task aggregation and reward shaping improve cross-task and cross-model transferability. Crucially, the target model's parameters remain frozen throughout, and the policy receives only sparse feedback derived from the model's output logits. Extensive experiments across five NLP benchmarks and three major language-model families show that the method significantly degrades model accuracy, achieving higher attack success rates and better transferability than state-of-the-art adversarial-trigger techniques.
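The page gives no equations, but the reward idea can be sketched. Below is a minimal illustration, assuming the calibration works like contextual calibration (subtracting the model's output on a content-free input to cancel label bias); the function name, signature, and calibration scheme are my assumptions, not the authors' code:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()  # the target model is frozen; we only read its output logits
def calibrated_ce_reward(attacked_logits, null_logits, label_token_ids, gold_idx):
    """Reward for one attacked example (hypothetical sketch).

    attacked_logits: (V,) next-token logits for "task input + suffix"
    null_logits:     (V,) logits for a content-free input (e.g. "N/A") with the
                     same suffix; assumed here to cancel label bias, in the
                     spirit of contextual calibration
    label_token_ids: token ids of the label verbalizers, e.g. [" positive", " negative"]
    gold_idx:        index of the true label within label_token_ids
    """
    scores = attacked_logits[label_token_ids]
    null_scores = null_logits[label_token_ids]
    # calibrated label scores: subtract the content-free log-probabilities
    calibrated = F.log_softmax(scores, dim=-1) - F.log_softmax(null_scores, dim=-1)
    # cross-entropy of the calibrated distribution on the gold label: the
    # suffix earns more reward the further it pushes the model off the answer
    return -F.log_softmax(calibrated, dim=-1)[gold_idx]
```

Because the reward is a single scalar read off the frozen model's logits, this matches the summary's claim that the attack needs only sparse black-box-style feedback, never gradients through the target.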

📝 Abstract
Language models are vulnerable to short adversarial suffixes that can reliably alter predictions. Previous works usually find such suffixes with gradient search or rule-based methods, but these are brittle and often tied to a single task or model. In this paper, a reinforcement learning framework is introduced in which the suffix is treated as a policy and trained with Proximal Policy Optimization against a frozen model serving as a reward oracle. Rewards are shaped with calibrated cross-entropy, which removes label bias, and are aggregated across surface forms to improve transferability. The proposed method is evaluated on five diverse NLP benchmark datasets, covering sentiment, natural language inference, paraphrase, and commonsense reasoning, using three distinct language models: Qwen2-1.5B Instruct, TinyLlama-1.1B Chat, and Phi-1.5. Results show that RL-trained suffixes consistently degrade accuracy and transfer more effectively across tasks and models than previous adversarial triggers of a similar kind.
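The abstract's "aggregating across surface forms" plausibly means pooling probability mass over every tokenization of a label before scoring, so the reward does not hinge on one model's tokenizer or casing. A hypothetical helper under that assumption:

```python
import torch

def label_log_prob(log_probs, surface_form_ids):
    """Hypothetical reading of "aggregating across surface forms": a label such
    as "yes" may be realized by several first tokens ("yes", " Yes", "YES"),
    so we sum their probability mass before scoring, which should make the
    reward more robust across models with different tokenizers.

    log_probs:        (V,) log-softmax over the vocabulary
    surface_form_ids: token ids of all surface forms of one label
    """
    # logsumexp over the surface forms = log of their total probability mass
    return torch.logsumexp(log_probs[surface_form_ids], dim=-1)
```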
Problem

Research questions and friction points this paper is trying to address.

Develops universal adversarial suffixes using reinforcement learning
Improves transferability across tasks and models via calibrated rewards
Evaluates method on diverse NLP benchmarks with multiple language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning trains adversarial suffixes as policies (see the PPO sketch after this list)
Calibrated cross-entropy shapes rewards for improved transferability
Method transfers across diverse tasks and language models
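A minimal sketch of the "suffix as a policy" idea, assuming (the page does not confirm this) an independent categorical distribution over the vocabulary at each suffix position and the standard clipped PPO objective; all names and hyperparameters are illustrative:

```python
import torch
import torch.nn.functional as F

V, k, clip_eps = 32_000, 8, 0.2                          # vocab size, suffix length, PPO clip
policy_logits = torch.zeros(k, V, requires_grad=True)    # learnable policy parameters
opt = torch.optim.Adam([policy_logits], lr=1e-2)

def sample_suffixes(batch_size):
    """Sample candidate suffixes and their joint log-probabilities."""
    dist = torch.distributions.Categorical(logits=policy_logits)  # batch of k positions
    tokens = dist.sample((batch_size,))                  # (B, k) suffix token ids
    return tokens, dist.log_prob(tokens).sum(dim=-1).detach()     # (B,) old log-probs

def ppo_step(tokens, old_log_probs, rewards):
    """One clipped-PPO update. rewards: (B,) scalar scores from the frozen
    target model (e.g. the calibrated cross-entropy reward sketched above)."""
    log_p = F.log_softmax(policy_logits, dim=-1)         # (k, V)
    new_lp = log_p.gather(1, tokens.T).T.sum(dim=1)      # (B,) joint log-prob under current policy
    adv = rewards - rewards.mean()                       # simple mean baseline as advantage
    ratio = (new_lp - old_log_probs).exp()
    loss = -torch.min(ratio * adv,
                      ratio.clamp(1 - clip_eps, 1 + clip_eps) * adv).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Each iteration would sample suffixes, append them to inputs drawn from several tasks, query the frozen model for rewards, and call ppo_step; training one shared policy against rewards pooled over tasks is what would make the resulting suffix "universal".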
Sampriti Soor
Center for Intelligent Cyber Physical Systems, Indian Institute of Technology Guwahati, India
Suklav Ghosh
Department of Computer Science and Engineering, Indian Institute of Technology Guwahati, India
Arijit Sur
Professor, Dept. of Computer Science and Engineering, Indian Institute of Technology Guwahati
Computer Vision, Machine Learning, Medical Imaging, Adaptive Video Streaming