🤖 AI Summary
Existing jailbreaking evaluation benchmarks lack case-level assessment criteria, leading to inconsistent results and poor cross-model comparability. To address this, we propose the first robust, human-centered evaluation framework specifically designed for LLM jailbreaking attacks. Our method comprises three key components: (1) constructing a fine-grained dataset of harmful queries; (2) introducing the first case-specific human calibration guideline; and (3) designing a guideline-driven, multi-dimensional scoring system that integrates LLM-assisted verification with quantitative disagreement analysis. Empirical evaluation demonstrates substantial improvements in fairness and consistency: mainstream jailbreaking methods that claim over 90% attack success rate (ASR) on prior benchmarks reach at most 30.2% on ours, and inter-annotator score variance decreases by up to 76.33%, significantly enhancing assessment stability and reliability.
📝 Abstract
Jailbreaking methods for large language models (LLMs) have gained increasing attention in the effort to build safe and responsible AI systems. After analyzing 35 jailbreak methods across six categories, we find that existing benchmarks, which rely on universal LLM-based or keyword-matching scores, lack case-specific criteria and therefore produce conflicting results. In this paper, we introduce a more robust evaluation framework for jailbreak methods, comprising a curated harmful question dataset, detailed case-by-case evaluation guidelines, and a scoring system equipped with these guidelines. Our experiments show that existing jailbreak methods exhibit better discrimination when evaluated on our benchmark. Some jailbreak methods that claim over 90% attack success rate (ASR) on other benchmarks reach a maximum of only 30.2% on ours, leaving greater headroom for more advanced jailbreak research; furthermore, our scoring system reduces the variance of disagreements between different evaluator LLMs by up to 76.33%. This demonstrates its ability to provide fairer and more stable evaluation.
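To make the two headline metrics concrete, here is a minimal sketch of how an ASR and the inter-evaluator disagreement variance could be computed. All function names, the 0–1 score scale, the success threshold, and the toy data are illustrative assumptions, not the authors' actual implementation:

```python
# Hypothetical sketch of ASR and disagreement-variance computation.
# Scores, threshold, and function names are illustrative assumptions.
from statistics import mean, pvariance

def asr(scores, threshold=0.5):
    """Attack success rate: fraction of cases whose mean score
    across evaluator LLMs exceeds a success threshold."""
    return sum(mean(case) > threshold for case in scores) / len(scores)

def disagreement_variance(scores):
    """Mean per-case variance of scores across evaluator LLMs;
    lower values mean the evaluators agree more."""
    return mean(pvariance(case) for case in scores)

# Each inner list: one harmful query scored by three evaluator LLMs (0-1).
baseline = [[0.9, 0.2, 0.6], [0.8, 0.1, 0.9], [0.7, 0.3, 0.5]]  # universal rubric
guided   = [[0.3, 0.2, 0.3], [0.2, 0.2, 0.3], [0.3, 0.3, 0.2]]  # case-specific guideline

reduction = 1 - disagreement_variance(guided) / disagreement_variance(baseline)
print(f"ASR (guided): {asr(guided):.1%}")
print(f"disagreement-variance reduction: {reduction:.1%}")
```

Under this toy data, guideline-conditioned scoring shrinks the spread between evaluators while also lowering the measured ASR, mirroring the direction (though not the exact magnitudes) of the paper's reported numbers.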