🤖 AI Summary
This study investigates the predictability of jailbreak attack transferability across large language models (LLMs): why do certain jailbreak prompts generalize across models, revealing shared vulnerabilities in AI safety mechanisms? To address this, we propose the first quantitative framework for predicting jailbreak transferability, jointly modeling source-model jailbreak strength and contextual representation similarity between source and target models. Methodologically, we introduce a lightweight distillation approach that leverages only the target model's responses to benign prompts to efficiently identify high-transferability source models. Our framework integrates representation similarity metrics, contextual embedding analysis, and quantitative jailbreak strength evaluation. Experiments demonstrate substantial improvements in transfer success rates, confirming that transferability arises from structural similarities in model representation spaces, not merely from failures of safety fine-tuning generalization. This work establishes a novel paradigm for AI safety evaluation grounded in representational analysis.
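The summary does not specify which representation similarity metric the framework uses; a common choice for comparing two models' representation spaces is linear CKA (centered kernel alignment). The sketch below, under that assumption, compares per-prompt contextual representations (e.g. mean-pooled hidden states) from two models, using synthetic matrices as stand-ins for real LLM activations:

```python
import numpy as np

def linear_cka(X, Y):
    # X: (n_prompts, d1), Y: (n_prompts, d2) -- per-prompt contextual
    # representations of the SAME prompts from two different models.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # HSIC-based similarity; invariant to orthogonal transforms and
    # isotropic scaling of either representation space.
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 32))          # "source model" representations
B = A @ rng.normal(size=(32, 24))        # linearly related "target model"
C = rng.normal(size=(1000, 24))          # unrelated model
print(linear_cka(A, B))  # clearly higher: shared representation geometry
print(linear_cka(A, C))  # near the chance baseline
```

A metric of this form would score a source/target pair before attempting transfer: pairs with higher representation similarity are the ones the framework predicts jailbreaks transfer between.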
📝 Abstract
Jailbreaks pose an imminent threat to the safety of modern AI systems by enabling users to disable safeguards and elicit unsafe information. Sometimes, jailbreaks discovered for one model incidentally transfer to another model, exposing a fundamental flaw in safeguarding. Unfortunately, there is no principled approach to identify when jailbreaks will transfer from a source model to a target model. In this work, we observe that transfer success from a source model to a target model depends on quantifiable measures of both jailbreak strength with respect to the source model and the contextual representation similarity of the two models. Furthermore, we show that transferability can be increased by distilling from the target model into the source model, using only the target model's responses to benign prompts as training data. We show that the distilled source model can act as a surrogate for the target model, yielding more transferable attacks against the target model. These results suggest that the success of jailbreaks is not merely due to exploitation of safety training failing to generalize out-of-distribution, but is instead a consequence of a more fundamental flaw in the contextual representations computed by models.
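The benign-only distillation step can be illustrated with a toy. In the actual method the "models" are LLMs and distillation means fine-tuning the source on the target's text responses; the sketch below (all names and the least-squares fit are illustrative stand-ins) keeps only the essential data flow: the target is queried exclusively on benign inputs, yet the distilled source imitates it on held-out inputs too, which is what makes it a useful attack surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 16, 4

# Black-box "target model": we may only query it, never read its weights.
W_target = 0.1 * rng.normal(size=(d_in, d_out))
def target_model(x):
    return np.tanh(x @ W_target)

# Collect target responses to BENIGN prompts only -- no unsafe queries.
benign_prompts = rng.normal(size=(200, d_in))
benign_responses = target_model(benign_prompts)

# Distill: fit the source to imitate the target on the benign data
# (least squares stands in for fine-tuning here).
W_source, *_ = np.linalg.lstsq(benign_prompts, benign_responses, rcond=None)

# The distilled source approximates the target on held-out inputs as well,
# so attacks optimized against it should transfer better to the target.
held_out = rng.normal(size=(50, d_in))
gap = np.abs(held_out @ W_source - target_model(held_out)).mean()
print(f"surrogate gap on held-out inputs: {gap:.4f}")
```

The point of the toy is the asymmetry: the imitation data is benign, but the resulting surrogate is faithful enough off-distribution that an attacker can optimize jailbreaks against it instead of the target.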