🤖 AI Summary
Large language models (LLMs) face significant security risks because their safety alignment mechanisms remain vulnerable to jailbreak attacks. Method: This paper proposes a top-down, neuron-level analytical framework that systematically probes inter-layer representations and critical neuron activation patterns from both semantic and functional perspectives. Leveraging an expert-informed, hierarchical probing architecture, it enables fine-grained mechanistic analysis of diverse jailbreak attacks. Contribution/Results: Quantitative evaluations and case studies demonstrate that the method identifies safety-critical neurons responsible for alignment failures, substantially improving the interpretability of alignment breakdowns. The study establishes a paradigm for uncovering internal safety vulnerabilities in LLMs and provides empirically grounded, interpretable insights to guide the development of robust, attack-resilient models.
📝 Abstract
In deployment and application, large language models (LLMs) typically undergo safety alignment to prevent illegal and unethical outputs. However, the continuous advancement of jailbreak attack techniques, which bypass safety mechanisms with adversarial prompts, has placed increasing pressure on the security defenses of LLMs. Strengthening resistance to jailbreak attacks requires an in-depth understanding of the security mechanisms and vulnerabilities of LLMs. However, the vast number of parameters and the complex structure of LLMs make analyzing security weaknesses from an internal perspective a challenging task. This paper presents NeuroBreak, a top-down jailbreak analysis system designed to analyze neuron-level safety mechanisms and mitigate vulnerabilities. We carefully design the system requirements in collaboration with three experts in the field of AI security. The system provides a comprehensive analysis of various jailbreak attack methods. By incorporating layer-wise representation probing, NeuroBreak offers a novel perspective on the model's decision-making process across its generation steps. Furthermore, the system supports the analysis of critical neurons from both semantic and functional perspectives, facilitating a deeper exploration of security mechanisms. We conduct quantitative evaluations and case studies to verify the effectiveness of our system, offering mechanistic insights for developing next-generation defense strategies against evolving jailbreak attacks.
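The layer-wise representation probing mentioned above can be illustrated with a minimal sketch. This is not NeuroBreak's actual pipeline: the hidden states below are synthetic stand-ins (in practice they would be collected from forward hooks on each transformer layer), and the probe is a simple nearest-centroid linear readout, used here only to show how per-layer separability between benign and jailbreak prompts can be measured.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hidden states: (layers, samples, hidden_dim).
# In a real analysis these would be captured per layer via model hooks.
n_layers, n_samples, d = 8, 200, 32
labels = rng.integers(0, 2, n_samples)  # 0 = benign prompt, 1 = jailbreak prompt

# Simulate representations whose class separation grows with depth,
# mimicking safety-relevant features emerging in later layers.
hidden = rng.normal(size=(n_layers, n_samples, d))
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)
for layer in range(n_layers):
    strength = layer / (n_layers - 1)  # deeper layers separate more
    hidden[layer] += strength * np.outer(2 * labels - 1, direction)

def probe_accuracy(reps, labels):
    """Nearest-class-centroid probe: a simple linear readout."""
    c0 = reps[labels == 0].mean(axis=0)
    c1 = reps[labels == 1].mean(axis=0)
    pred = (np.linalg.norm(reps - c1, axis=1)
            < np.linalg.norm(reps - c0, axis=1)).astype(int)
    return float((pred == labels).mean())

accs = [probe_accuracy(hidden[layer], labels) for layer in range(n_layers)]
for layer, acc in enumerate(accs):
    print(f"layer {layer}: probe accuracy {acc:.2f}")
```

In this toy setup, probe accuracy rises with depth, which is the kind of signal a layer-wise probing analysis uses to locate where in the network safety-relevant distinctions are represented.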