Exploiting Prefix-Tree in Structured Output Interfaces for Enhancing Jailbreak Attacking

📅 2025-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies a novel jailbreaking threat arising from structured-output interfaces in large language models (LLMs): adversaries with only API access can dynamically manipulate internal logits during generation to evade safety filters. To address this, the authors propose AttackPrefixTree (APT), the first black-box attack framework tailored for structured outputs. APT models both safety-rejecting prefixes and latent harmful-output prefixes, leveraging a prefix-tree structure, logit-space perturbation, and prompt engineering to enable online construction and optimization of attack patterns. Extensive experiments across multiple benchmark datasets demonstrate that APT substantially outperforms existing jailbreaking methods. The results expose a systemic vulnerability stemming from the coupling of safety mechanisms with structured-output generation, underscoring the urgent need to redesign LLM safety protocols to mitigate such interface-level exploits.
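To make the prefix-tree idea concrete, here is a minimal illustrative sketch, not the authors' implementation: a trie over known safety-refusal prefixes. In an APT-style attack, an adversary with structured-output access (e.g., a grammar or regex constraint passed through the API) could forbid any continuation that extends a stored refusal prefix, steering decoding away from the refusal region. All class and method names below are hypothetical.

```python
class RefusalPrefixTree:
    """Toy prefix tree over safety-refusal token sequences (illustrative only)."""

    def __init__(self):
        self.root = {}  # token -> child dict; "$" marks the end of a stored prefix

    def add(self, prefix_tokens):
        """Store one refusal prefix, e.g. ["I", "cannot", "help"]."""
        node = self.root
        for tok in prefix_tokens:
            node = node.setdefault(tok, {})
        node["$"] = True

    def blocked_next_tokens(self, generated_tokens):
        """Tokens that would extend `generated_tokens` along a stored refusal prefix.

        An attacker could translate this set into a structured-output constraint
        that excludes these tokens at the next decoding step.
        """
        node = self.root
        for tok in generated_tokens:
            if tok not in node:
                return set()  # output has already left the refusal region
            node = node[tok]
        return {t for t in node if t != "$"}


tree = RefusalPrefixTree()
tree.add(["I", "cannot", "help"])
tree.add(["I", "am", "sorry"])

# After the model emits "I", both "cannot" and "am" would continue a known
# refusal, so a constraint would exclude them at this step.
print(tree.blocked_next_tokens(["I"]))          # {'cannot', 'am'}
print(tree.blocked_next_tokens(["Sure", ","]))  # set()
```

The online aspect described in the summary would correspond to calling `add` with each newly observed refusal prefix, so the tree grows as the attack probes the model.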

📝 Abstract
The rise of Large Language Models (LLMs) has led to significant applications but also introduced serious security threats, particularly from jailbreak attacks that manipulate output generation. These attacks utilize prompt engineering and logit manipulation to steer models toward harmful content, prompting LLM providers to implement filtering and safety alignment strategies. We investigate LLMs' safety mechanisms and their recent applications, revealing a new threat model targeting structured output interfaces, which enables attackers to manipulate internal logits during LLM generation while requiring only API access. To demonstrate this threat model, we introduce a black-box attack framework called AttackPrefixTree (APT). APT exploits structured output interfaces to dynamically construct attack patterns. By leveraging prefixes of models' safety-refusal responses and latent harmful outputs, APT effectively bypasses safety measures. Experiments on benchmark datasets indicate that this approach achieves a higher attack success rate than existing methods. This work highlights the urgent need for LLM providers to enhance security protocols to address vulnerabilities arising from the interaction between safety patterns and structured outputs.
Problem

Research questions and friction points this paper is trying to address.

Improving the effectiveness of jailbreak attacks on LLMs
Exploiting structured output interfaces as a new attack surface
Bypassing safety measures with the AttackPrefixTree (APT) framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exploits structured output interfaces to constrain generation
Uses a black-box attack framework requiring only API access
Leverages safety-refusal response prefixes and latent harmful-output prefixes
Yanzeng Li
Beijing Normal University
Yunfan Xiong
Wangxuan Institute of Computer Technology, Peking University
Jialun Zhong
Wangxuan Institute of Computer Technology, Peking University
Jinchao Zhang
WeChat AI - Pattern Recognition Center
Deep Learning · Natural Language Processing · Machine Translation · Dialogue System
Jie Zhou
Pattern Recognition Center, WeChat AI, Tencent Inc.
Lei Zou
Wangxuan Institute of Computer Technology, Peking University