🤖 AI Summary
Current defenses against adversarial attacks on large language models (LLMs) remain insufficiently robust. To address this, we propose the Sparse Feature Perturbation Framework (SFPF), which generates stealthy, semantically preserved adversarial texts. SFPF employs sparse autoencoders (SAEs) to reconstruct hidden-layer representations, enabling interpretable, cross-layer identification of critical semantic features; it then applies clustering analysis to detect highly activated features and introduces a selective perturbation mechanism that preserves malicious intent while injecting safety-aligned signals. Evaluated in black-box attack settings, SFPF substantially improves attack success rates and evades mainstream defenses, including prompt filtering and response moderation, thereby exposing semantic vulnerabilities across multiple LLM representation layers. Our work establishes a new paradigm for robustness evaluation and defense enhancement, grounded in fine-grained, feature-level adversarial analysis.
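To make the SAE-then-cluster step concrete, here is a minimal sketch of how highly activated features might be identified. The `SparseAutoencoder` class, the `top_activated_features` helper, and all dimensions and cluster counts are illustrative assumptions for exposition, not the paper's actual implementation or hyperparameters.

```python
# Minimal sketch of SFPF's feature-identification step (assumed form):
# encode hidden states with an SAE, cluster the sparse features of
# successful adversarial prompts, and report top-activated features.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class SparseAutoencoder(nn.Module):
    """Over-complete SAE: maps d_model-dim hidden states into a larger,
    ReLU-sparsified feature space and reconstructs them."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, h: torch.Tensor):
        f = torch.relu(self.encoder(h))   # sparse feature activations
        return f, self.decoder(f)         # (features, reconstruction)

def top_activated_features(hidden_states, sae, n_clusters=4, top_k=16):
    """Cluster SAE features of successfully attacked prompts; return,
    per cluster, the indices of its most strongly activated features."""
    with torch.no_grad():
        feats, _ = sae(hidden_states)     # shape: (n_prompts, d_features)
    labels = torch.tensor(
        KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats.numpy())
    )
    return {
        c: torch.topk(feats[labels == c].mean(dim=0), top_k).indices.tolist()
        for c in range(n_clusters)
    }

# Stand-in data: a real run would extract hidden states from a chosen
# layer of the target LLM for prompts that already jailbroke it.
hidden_states = torch.randn(64, 768)
sae = SparseAutoencoder(d_model=768, d_features=4096)
print(top_activated_features(hidden_states, sae)[0][:5])
```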
📝 Abstract
With the rapid proliferation of Natural Language Processing (NLP) systems, especially Large Language Models (LLMs), generating adversarial examples that jailbreak LLMs remains a key challenge for understanding model vulnerabilities and improving robustness. In this context, we propose a new black-box attack method that leverages the interpretability of large models. We introduce the Sparse Feature Perturbation Framework (SFPF), a novel approach for adversarial text generation that uses sparse autoencoders (SAEs) to identify and manipulate critical features in text. After using the SAE to reconstruct hidden-layer representations, we perform feature clustering on the successfully attacked texts to identify features with high activations. These highly activated features are then perturbed to generate new adversarial texts. This selective perturbation preserves the malicious intent while amplifying safety signals, thereby increasing the texts' potential to evade existing defenses. Our method enables a new red-teaming strategy that balances adversarial effectiveness with safety alignment. Experimental results demonstrate that adversarial texts generated by SFPF can bypass state-of-the-art defense mechanisms, revealing persistent vulnerabilities in current NLP systems. However, the method's effectiveness varies across prompts and layers, and its generalizability to other architectures and larger models remains to be validated.
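The selective perturbation step can be illustrated with a short, self-contained sketch. Here the SAE is represented by stand-in encoder/decoder weight matrices, and the `selectively_perturb` function, feature indices, and scaling factor are hypothetical choices meant only to show the shape of the computation, not the paper's exact mechanism.

```python
# Hedged sketch of selective perturbation in SAE feature space:
# encode a hidden state, rescale a flagged subset of features
# (e.g. those identified by clustering), and decode back.
import torch

d_model, d_features = 768, 4096
W_enc = torch.randn(d_features, d_model) / d_model ** 0.5      # stand-in SAE encoder
W_dec = torch.randn(d_model, d_features) / d_features ** 0.5   # stand-in SAE decoder

def selectively_perturb(h, perturb_idx, scale=0.2):
    """Encode h into sparse features, rescale only the selected
    features, and decode back to the model's hidden space."""
    f = torch.relu(h @ W_enc.T)     # sparse feature activations
    f[:, perturb_idx] *= scale      # perturb only the flagged features
    return f @ W_dec.T              # perturbed hidden representation

h = torch.randn(4, d_model)                          # hidden states for 4 prompts
h_adv = selectively_perturb(h, perturb_idx=[3, 17, 42])
print((h_adv - h).norm(dim=-1))                      # magnitude of the perturbation
```

In practice, `perturb_idx` would come from the clustering analysis of successfully attacked texts, and the perturbed representation would guide generation of new adversarial prompts.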