Beyond Suffixes: Token Position in GCG Adversarial Attacks on Large Language Models

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current safety alignment mechanisms in large language models are vulnerable to suffix-based GCG adversarial attacks, and existing analyses often overlook the influence of adversarial token placement. This work introduces token position as a critical variable in GCG attack analysis, systematically investigating its impact on attack success rates, particularly when adversarial tokens are placed in the prompt prefix. By combining position-aware attack optimization with evaluation that varies token placement, the experiments demonstrate that prefix-based attacks and strategic token repositioning significantly enhance adversarial effectiveness. These findings expose a critical blind spot in existing safety evaluation frameworks regarding positional sensitivity and advocate for a robustness assessment perspective that explicitly accounts for token location within prompts.

📝 Abstract
Large Language Models (LLMs) have seen widespread adoption across multiple domains, creating an urgent need for robust safety alignment mechanisms. However, robustness remains challenging due to jailbreak attacks that bypass alignment via adversarial prompts. In this work, we focus on the prevalent Greedy Coordinate Gradient (GCG) attack and identify a previously underexplored attack axis in jailbreak attacks typically framed as suffix-based: the placement of adversarial tokens within the prompt. Using GCG as a case study, we show that both optimizing attacks to generate prefixes instead of suffixes and varying adversarial token position during evaluation substantially influence attack success rates. Our findings highlight a critical blind spot in current safety evaluations and underline the need to account for the position of adversarial tokens in the adversarial robustness evaluation of LLMs.
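The abstract's core idea, that the same adversarial token string can be attached at different positions in the prompt, can be illustrated with a minimal sketch. The function name, the stand-in request, and the placeholder adversarial string below are all hypothetical, not taken from the paper or from a real GCG run:

```python
# Minimal sketch of adversarial-token placement (hypothetical helper,
# not the paper's actual attack code).

def place_adversarial(request: str, adv_tokens: str, position: str) -> str:
    """Assemble a prompt with adversarial tokens at a chosen position."""
    if position == "suffix":   # the classic GCG setup
        return f"{request} {adv_tokens}"
    if position == "prefix":   # the placement this paper highlights
        return f"{adv_tokens} {request}"
    raise ValueError(f"unknown position: {position}")

# Placeholder strings for illustration only.
request = "<harmful request>"
adv = "! ! describing.+ similarlyNow"  # stand-in, not a real optimized string

print(place_adversarial(request, adv, "suffix"))
print(place_adversarial(request, adv, "prefix"))
```

The paper's point is that attack success rates differ measurably between these two assemblies, even though most safety evaluations only test the suffix case.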
Problem

Research questions and friction points this paper is trying to address.

adversarial attacks
token position
jailbreak attacks
LLM safety
GCG
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial token position
GCG attack
jailbreak attacks
prompt placement
LLM robustness
Hicham Eddoubi
University of Cagliari, Italy; Sapienza University of Rome, Italy
Umar Faruk Abdullahi
Huawei Technologies Finland Research Center
Fadi Hassan
Ph.D.
Artificial Intelligence · Natural Language Processing · Computer Security · Data Privacy