Layer-Wise Perturbations via Sparse Autoencoders for Adversarial Text Generation

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current defenses against adversarial attacks on large language models (LLMs) exhibit insufficient robustness. To address this, we propose the Sparse Feature Perturbation Framework (SFPF), which generates highly stealthy, semantically preserved adversarial texts. SFPF employs sparse autoencoders to reconstruct hidden-layer representations, enabling cross-layer, interpretable identification of critical semantic features; it further integrates clustering analysis to detect highly activated features and introduces a selective perturbation mechanism that preserves malicious intent while injecting safety-aligned signals. Evaluated under black-box attack settings, SFPF significantly improves attack success rates and effectively evades mainstream defenses—including prompt filtering and response moderation—thereby exposing deep-layer semantic vulnerabilities across multiple LLM representation layers. Our work establishes a novel paradigm for robustness evaluation and defense enhancement, grounded in fine-grained, feature-level adversarial analysis.

📝 Abstract
With the rapid proliferation of Natural Language Processing (NLP), especially Large Language Models (LLMs), generating adversarial examples to jailbreak LLMs remains a key challenge for understanding model vulnerabilities and improving robustness. In this context, we propose a new black-box attack method that leverages the interpretability of large models. We introduce the Sparse Feature Perturbation Framework (SFPF), a novel approach for adversarial text generation that utilizes sparse autoencoders to identify and manipulate critical features in text. After using the SAE model to reconstruct hidden-layer representations, we perform feature clustering on the successfully attacked texts to identify features with higher activations. These highly activated features are then perturbed to generate new adversarial texts. This selective perturbation preserves the malicious intent while amplifying safety signals, thereby increasing their potential to evade existing defenses. Our method enables a new red-teaming strategy that balances adversarial effectiveness with safety alignment. Experimental results demonstrate that adversarial texts generated by SFPF can bypass state-of-the-art defense mechanisms, revealing persistent vulnerabilities in current NLP systems. However, the method's effectiveness varies across prompts and layers, and its generalizability to other architectures and larger models remains to be validated.
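The pipeline the abstract describes (SAE reconstruction of hidden states, activation-based selection of critical features on successful attacks, then selective perturbation) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy SAE weights, shapes, the top-k mean-activation selection standing in for the paper's clustering analysis, and the `perturb` scaling are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 64, 256  # hidden size and overcomplete SAE dictionary size (illustrative)

# Toy sparse autoencoder: a ReLU encoder yields sparse feature activations;
# the decoder maps codes back into the model's hidden-representation space.
W_enc = rng.normal(0, 0.1, (d_model, d_sae))
W_dec = rng.normal(0, 0.1, (d_sae, d_model))

def sae_encode(h):
    # ReLU produces (mostly) sparse, non-negative feature activations
    return np.maximum(h @ W_enc, 0.0)

def sae_decode(z):
    return z @ W_dec

# Stand-in for hidden states of texts that successfully attacked the model.
hidden_success = rng.normal(size=(32, d_model))
acts = sae_encode(hidden_success)

# Stand-in for the paper's clustering analysis: rank features by mean
# activation across successful attacks and keep the top-k as "critical".
k = 8
critical = np.argsort(acts.mean(axis=0))[-k:]

def perturb(h, scale=1.5):
    """Selectively amplify only the critical features, then decode back."""
    z = sae_encode(h)
    z[critical] *= scale  # perturb highly activated features only
    return sae_decode(z)

h_new = perturb(rng.normal(size=d_model))
```

In a real attack the perturbed hidden representation would have to be realized as text (e.g. by searching for prompts whose representations match it), a step the sketch omits.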
Problem

Research questions and friction points this paper is trying to address.

Generating adversarial texts to bypass LLM defenses
Identifying critical features via sparse autoencoders
Balancing attack effectiveness with safety alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse autoencoders identify critical text features
Perturb highly activated features for adversarial texts
Balances adversarial effectiveness with safety alignment
Huizhen Shu
hydrox.ai
Xuying Li
Independent AI Researcher
AI Interpretability · AI Controllability · AI Safety
Qirui Wang
hydrox.ai
Yuji Kosuga
hydrox.ai
Mengqiu Tian
hydrox.ai
Zhuo Li
hydrox.ai