h4rm3l: A Language for Composable Jailbreak Attack Synthesis

📅 2024-08-09
📈 Citations: 3
Influential: 0
🤖 AI Summary
Existing LLM safety evaluation methods rely on templated prompt sets, limiting coverage of diverse jailbreaking attacks and enabling widespread deployment of insecure models. To address this, we propose the first composable generation framework specifically designed for jailbreaking: (1) a human-readable, domain-specific language (h4rm3l DSL) enabling structured, parameterized string transformations; (2) a black-box attack generation method integrating program synthesis with multi-armed bandit optimization for targeted adversarial prompting; and (3) an automated harmful behavior classifier ensuring human-aligned safety assessment. Evaluated on six state-of-the-art LLMs, our framework generates 2,656 novel jailbreaking prompts with an average success rate exceeding 90%, substantially outperforming existing baselines. The results systematically expose deep vulnerabilities in current safety filtering mechanisms, revealing critical gaps in robustness against compositional adversarial strategies.

📝 Abstract
Despite their demonstrated valuable capabilities, state-of-the-art (SOTA) widely deployed large language models (LLMs) still have the potential to cause harm to society due to the ineffectiveness of their safety filters, which can be bypassed by prompt transformations called jailbreak attacks. Current approaches to LLM safety assessment, which employ datasets of templated prompts and benchmarking pipelines, fail to cover sufficiently large and diverse sets of jailbreak attacks, leading to the widespread deployment of unsafe LLMs. Recent research showed that novel jailbreak attacks could be derived by composition; however, a formal composable representation for jailbreak attacks, which, among other benefits, could enable the exploration of a large compositional space of jailbreak attacks through program synthesis methods, has not been previously proposed. We introduce h4rm3l, a novel approach that addresses this gap with a human-readable domain-specific language (DSL). Our framework comprises: (1) the h4rm3l DSL, which formally expresses jailbreak attacks as compositions of parameterized string transformation primitives; (2) a synthesizer with bandit algorithms that efficiently generates jailbreak attacks optimized for a target black-box LLM; and (3) the h4rm3l red-teaming software toolkit, which employs the previous two components and an automated harmful LLM behavior classifier that is strongly aligned with human judgment. We demonstrate h4rm3l's efficacy by synthesizing a dataset of 2,656 successful novel jailbreak attacks targeting 6 SOTA open-source and proprietary LLMs, and by benchmarking those models against a subset of these synthesized attacks. Our results show that h4rm3l's synthesized attacks are diverse and more successful than existing jailbreak attacks in the literature, with success rates exceeding 90% on SOTA LLMs.
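The DSL's core idea, expressing a jailbreak attack as a composition of parameterized string transformation primitives, can be sketched as follows. The primitives and names here are hypothetical illustrations, not the paper's actual h4rm3l API:

```python
import codecs
from dataclasses import dataclass
from typing import Callable

# Hypothetical primitives illustrating the compositional idea only;
# the real h4rm3l DSL's primitive names and structure may differ.
@dataclass
class Transform:
    name: str
    fn: Callable[[str], str]

    def __call__(self, prompt: str) -> str:
        return self.fn(prompt)

def compose(*steps: Transform) -> Transform:
    """Chain transforms left to right into a single attack program."""
    def chained(prompt: str) -> str:
        for step in steps:
            prompt = step(prompt)
        return prompt
    return Transform(" | ".join(s.name for s in steps), chained)

def prefix(text: str) -> Transform:
    """Parameterized primitive: prepend a fixed string to the prompt."""
    return Transform(f"prefix({text!r})", lambda p: text + p)

def rot13() -> Transform:
    """Primitive: ROT13-encode the whole prompt."""
    return Transform("rot13", lambda p: codecs.encode(p, "rot13"))

# Compose two primitives into one attack program.
attack = compose(prefix("Please roleplay: "), rot13())
transformed = attack("example prompt")
```

Because each primitive is a plain string-to-string function, the compositional space of attacks is just the space of programs over these primitives, which is what makes program-synthesis search over it possible.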
Problem

Research questions and friction points this paper is trying to address.

Ineffective safety filters in LLMs bypassed by jailbreak attacks.
Lack of diverse jailbreak attack datasets for LLM safety assessment.
Need for formal composable representation of jailbreak attacks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

h4rm3l DSL for composable jailbreak attack representation
Bandit algorithm synthesizer for optimized attack generation
Red-teaming toolkit with automated harmful behavior classification
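A minimal sketch of bandit-style search over candidate attack programs, using epsilon-greedy selection against a success oracle. This is an illustrative stand-in, not the paper's exact synthesis algorithm, and the function names and scoring interface are assumptions:

```python
import random

# Illustrative epsilon-greedy bandit over candidate attack programs.
# `score` is an assumed oracle returning 1 if the attack succeeded
# against the target LLM, else 0 (e.g. a harmful-behavior classifier).
def bandit_search(candidates, score, iters=100, eps=0.2, seed=0):
    """Repeatedly pick a candidate, observe success, track empirical means."""
    rng = random.Random(seed)
    counts = {c: 0 for c in candidates}
    wins = {c: 0 for c in candidates}
    for _ in range(iters):
        if rng.random() < eps or not any(counts.values()):
            choice = rng.choice(candidates)  # explore
        else:
            # exploit: highest empirical success rate so far
            choice = max(candidates, key=lambda c: wins[c] / max(counts[c], 1))
        counts[choice] += 1
        wins[choice] += score(choice)
    return max(candidates, key=lambda c: wins[c] / max(counts[c], 1))

# Toy deterministic oracle: only candidate "b" ever succeeds.
best = bandit_search(["a", "b", "c"], lambda c: 1 if c == "b" else 0)
```

In the paper's setting the arms would be synthesized compositions of DSL primitives and the reward would come from querying the target black-box LLM, so the bandit concentrates its limited query budget on the most promising attack programs.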
M. Doumbouya
Department of Computer Science, 353 Serra Mall, Stanford, CA 94305
Ananjan Nandi
Student, Stanford University
natural language processing, information retrieval, machine learning, deep learning
Gabriel Poesia
Stanford University
Formal Mathematics, Reinforcement Learning, AI Reasoning, Discovery Systems
Davide Ghilardi
Department of Computer Science, 353 Serra Mall, Stanford, CA 94305
Anna Goldie
Department of Computer Science, 353 Serra Mall, Stanford, CA 94305
Federico Bianchi
Department of Computer Science, 353 Serra Mall, Stanford, CA 94305
Daniel Jurafsky
Department of Computer Science, 353 Serra Mall, Stanford, CA 94305
Christopher D. Manning
Department of Computer Science, 353 Serra Mall, Stanford, CA 94305