🤖 AI Summary
Language models are vulnerable to short adversarial suffixes, yet existing gradient- or rule-based methods generalize poorly and transfer weakly across tasks and models. To address this, we propose the first reinforcement learning-based framework for generating universal adversarial suffixes. The suffix is modeled as a policy trained via Proximal Policy Optimization (PPO); a calibrated cross-entropy reward mitigates label bias, while multi-task aggregation and reward shaping improve cross-task and cross-model transferability. Crucially, the target model's parameters remain frozen throughout, with only sparse feedback derived from its output logits. Extensive experiments across five NLP benchmarks and three major language model families show that our method significantly degrades model accuracy, achieving higher attack success rates and better transferability than state-of-the-art adversarial trigger techniques.
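To make the training setup concrete, here is a minimal PyTorch sketch of the PPO loop, assuming a per-position categorical policy over suffix tokens and a stubbed-out frozen scorer in place of the real target model. All names, sizes, and hyperparameters below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal PPO sketch for a universal suffix policy (illustrative only).
import torch
import torch.nn.functional as F

VOCAB, SUFFIX_LEN = 1000, 5  # assumed toy sizes

# Policy: one categorical distribution over the vocabulary per suffix
# position. These logits are the only trainable parameters.
policy_logits = torch.zeros(SUFFIX_LEN, VOCAB, requires_grad=True)
opt = torch.optim.Adam([policy_logits], lr=1e-2)

@torch.no_grad()
def frozen_reward(suffix_ids: torch.Tensor) -> torch.Tensor:
    """Stand-in for the frozen target model: would return a scalar reward
    per sampled suffix (e.g., calibrated cross-entropy of the gold label
    computed from the model's output logits). Placeholder scores here."""
    return torch.randn(suffix_ids.shape[0])

def sample(batch: int):
    dist = torch.distributions.Categorical(logits=policy_logits)
    ids = dist.sample((batch,))        # (batch, SUFFIX_LEN) token ids
    logp = dist.log_prob(ids).sum(-1)  # joint log-prob of each suffix
    return ids, logp

CLIP = 0.2
for step in range(200):
    ids, logp_old = sample(batch=64)
    rewards = frozen_reward(ids)
    # Normalized advantages from the sparse, non-differentiable rewards.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-6)
    for _ in range(4):  # a few PPO epochs on the same batch
        dist = torch.distributions.Categorical(logits=policy_logits)
        logp_new = dist.log_prob(ids).sum(-1)
        ratio = torch.exp(logp_new - logp_old.detach())
        # PPO clipped surrogate objective; negated because Adam minimizes.
        loss = -torch.min(ratio * adv,
                          torch.clamp(ratio, 1 - CLIP, 1 + CLIP) * adv).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Because the reward is read off the frozen model's outputs rather than backpropagated through it, the target model needs no gradient access at all, which is what allows the same loop to attack any model exposing output logits.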
📝 Abstract
Language models are vulnerable to short adversarial suffixes that can reliably alter their predictions. Previous work typically finds such suffixes with gradient search or rule-based methods, but these are brittle and often tied to a single task or model. In this paper, suffix generation is framed as a reinforcement learning problem: the suffix is treated as a policy and trained with Proximal Policy Optimization against a frozen model that serves as a reward oracle. Rewards are shaped with a calibrated cross-entropy objective that removes label bias and aggregates over label surface forms to improve transferability. The proposed method is evaluated on five diverse NLP benchmark datasets, covering sentiment, natural language inference, paraphrase, and commonsense reasoning, using three distinct language models: Qwen2-1.5B Instruct, TinyLlama-1.1B Chat, and Phi-1.5. Results show that RL-trained suffixes consistently degrade accuracy and transfer across tasks and models more effectively than comparable prior adversarial triggers.
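The calibrated reward can be sketched as follows: the gold label's probability is aggregated over its surface-form token ids, and a score computed on a content-free input is subtracted to cancel the model's label bias, in the spirit of contextual calibration. This is an assumption-laden illustration of the idea, not the paper's exact formula; all names are hypothetical.

```python
# Illustrative calibrated cross-entropy reward (assumed formulation).
import torch

def label_logprob(logits: torch.Tensor, surface_ids: list[int]) -> torch.Tensor:
    """Aggregate one label's probability over its surface-form token ids
    (e.g., ' positive' and ' Positive') via log-sum-exp over the
    next-token distribution."""
    logp = torch.log_softmax(logits, dim=-1)
    return torch.logsumexp(logp[surface_ids], dim=-1)

def calibrated_ce_reward(logits_attacked: torch.Tensor,
                         logits_content_free: torch.Tensor,
                         label_surfaces: dict[str, list[int]],
                         gold: str) -> torch.Tensor:
    """Reward = cross-entropy of the gold label after calibration.
    Subtracting the content-free score removes the model's inherent
    label bias; a higher reward means the suffix hurts the gold label more."""
    scores = {}
    for label, ids in label_surfaces.items():
        raw = label_logprob(logits_attacked, ids)
        bias = label_logprob(logits_content_free, ids)
        scores[label] = raw - bias  # calibrated log-score per label
    cal = torch.stack(list(scores.values()))
    logp_gold = torch.log_softmax(cal, dim=-1)[list(scores).index(gold)]
    return -logp_gold  # calibrated cross-entropy of the gold label
```

In the full pipeline this per-example reward would then be aggregated across tasks (the multi-task aggregation mentioned above) before being fed to PPO.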