Super Suffixes: Bypassing Text Generation Alignment and Guard Models Simultaneously

📅 2025-12-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the robustness of large language models (LLMs) against adversarial suffixes that bypass safety guard models, in particular Llama Prompt Guard 2, when the models process untrusted inputs or generate executable code. The authors propose Super Suffixes, an attack framework whose suffixes override multiple alignment objectives at once and transfer across models with different tokenization schemes. They also design DeltaGuard, a lightweight detection mechanism that models malicious intent via the cosine similarity between the residual stream and specific concept directions. Experiments show that Super Suffixes evade Prompt Guard 2's protections across five mainstream generative models, while DeltaGuard achieves a 99.8% malicious-prompt detection rate, substantially enhancing guard-model robustness. To the authors' knowledge, this is the first work to show that Prompt Guard 2 can be compromised through joint optimization, i.e., a suffix optimized simultaneously against the guard model and the target text generation model, and it establishes an interpretable, deployable paradigm for LLM safety.

📝 Abstract
The rapid deployment of Large Language Models (LLMs) has created an urgent need for enhanced security and privacy measures in Machine Learning (ML). LLMs are increasingly being used to process untrusted text inputs and even generate executable code, often while having access to sensitive system controls. To address these security concerns, several companies have introduced guard models, which are smaller, specialized models designed to protect text generation models from adversarial or malicious inputs. In this work, we advance the study of adversarial inputs by introducing Super Suffixes, suffixes capable of overriding multiple alignment objectives across various models with different tokenization schemes. We demonstrate their effectiveness, along with our joint optimization technique, by successfully bypassing the protection mechanisms of Llama Prompt Guard 2 on five different text generation models for malicious text and code generation. To the best of our knowledge, this is the first work to reveal that Llama Prompt Guard 2 can be compromised through joint optimization. Additionally, by analyzing the changing similarity of a model's internal state to specific concept directions during token sequence processing, we propose an effective and lightweight method to detect Super Suffix attacks. We show that the cosine similarity between the residual stream and certain concept directions serves as a distinctive fingerprint of model intent. Our proposed countermeasure, DeltaGuard, significantly improves the detection of malicious prompts generated through Super Suffixes. It increases the non-benign classification rate to nearly 100%, making DeltaGuard a valuable addition to the guard model stack and enhancing robustness against adversarial prompt attacks.
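The abstract describes DeltaGuard's core signal: the cosine similarity between the model's residual stream and certain concept directions, tracked as tokens are processed, acts as a fingerprint of intent. A minimal sketch of that idea (the vector sizes, the concept direction, and the threshold heuristic here are illustrative assumptions, not the paper's actual parameters):

```python
import numpy as np

def cosine_sim(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def intent_fingerprint(residual_states, concept_direction):
    """Per-token cosine similarity of the residual stream to a concept
    direction; the trajectory of these values is the 'fingerprint'."""
    return [cosine_sim(h, concept_direction) for h in residual_states]

def flag_malicious(fingerprint, threshold=0.3):
    """Flag the prompt if any token's similarity to a hypothetical
    'harmful-intent' direction exceeds a threshold (toy heuristic)."""
    return max(fingerprint) > threshold

# Toy example: a 4-token sequence of 3-d residual states and a
# made-up concept direction standing in for 'harmful intent'.
rng = np.random.default_rng(0)
states = rng.normal(size=(4, 3))
concept = np.array([1.0, 0.0, 0.0])
print(flag_malicious(intent_fingerprint(states, concept)))
```

In the real system the residual states would come from a transformer's hidden activations and the concept directions would be learned or extracted from the model, not hand-picked as here.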
Problem

Research questions and friction points this paper is trying to address.

Bypassing guard models for malicious text generation
Compromising Llama Prompt Guard 2 via joint optimization
Detecting Super Suffix attacks using model internal states
Innovation

Methods, ideas, or system contributions that make the work stand out.

Super Suffixes bypass guard models via joint optimization
DeltaGuard detects attacks using residual stream cosine similarity
Method overrides alignment across models with different tokenization
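The joint optimization named above can be pictured as minimizing one combined objective: the suffix must simultaneously look benign to the guard model and steer the target model toward the attacker's output. A minimal sketch of such a combined loss (the label index, forced token, and weighting are illustrative assumptions, not the authors' actual formulation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def guard_loss(guard_logits):
    """Cross-entropy pushing the guard classifier toward the 'benign'
    label (assumed to be index 0 here, purely for illustration)."""
    return -np.log(softmax(guard_logits)[0])

def target_loss(target_logits, forced_token):
    """Cross-entropy pushing the target model toward emitting the
    attacker's desired next token (the jailbreak objective)."""
    return -np.log(softmax(target_logits)[forced_token])

def joint_loss(guard_logits, target_logits, forced_token, lam=1.0):
    """A suffix scoring low here both evades the guard model and
    overrides the target model's alignment; an attack would search
    over suffix tokens to minimize this quantity."""
    return guard_loss(guard_logits) + lam * target_loss(target_logits, forced_token)
```

Cross-tokenizer transfer is the hard part the paper addresses: the same suffix string must tokenize into effective sequences under several different vocabularies at once, which this per-model loss sketch does not capture.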
Andrew Adiletta
MITRE, Bedford, Massachusetts
Kathryn Adiletta
Worcester Polytechnic Institute, Worcester, Massachusetts
Kemal Derya
Worcester Polytechnic Institute, Worcester, Massachusetts
Berk Sunar
Worcester Polytechnic Institute
Security · Computer Engineering