🤖 AI Summary
Conventional single-turn adversarial prompts are increasingly evaded by modern large language models (LLMs), failing to uncover latent ethical biases masked by superficial compliance. Method: We propose a novel value-alignment evaluation benchmark that transcends single-sentence constraints by integrating multi-turn dialogues, narrative ethical scenarios, and dynamically embedded conversational traps—enhancing both stealth and adversarial rigor. Our framework systematically incorporates ethically ambiguous stories, progressive value probing, and fine-grained response analysis, and we curate a high-quality, multi-turn evaluation dataset. Contribution/Results: Experiments demonstrate that our benchmark effectively exposes previously hidden biases in leading LLMs—biases undetected under traditional evaluations—thereby significantly improving the authenticity, robustness, and depth of value-alignment assessment. This establishes a more reliable, rigorous, and actionable evaluation paradigm for AI safety and responsible deployment.
📝 Abstract
Evaluating the value alignment of large language models (LLMs) has traditionally relied on single-sentence adversarial prompts, which directly probe models with ethically sensitive or controversial questions. However, with the rapid advancements in AI safety techniques, models have become increasingly adept at circumventing these straightforward tests, limiting their effectiveness in revealing underlying biases and ethical stances. To address this limitation, we propose an upgraded value alignment benchmark that moves beyond single-sentence prompts by incorporating multi-turn dialogues and narrative-based scenarios. This approach enhances the stealth and adversarial nature of the evaluation, making it more robust against superficial safeguards implemented in modern LLMs. We design and implement a dataset that includes conversational traps and ethically ambiguous storytelling, systematically assessing LLMs' responses in more nuanced and context-rich settings. Experimental results demonstrate that this enhanced methodology can effectively expose latent biases that remain undetected in traditional single-shot evaluations. Our findings highlight the necessity of contextual and dynamic testing for value alignment in LLMs, paving the way for more sophisticated and realistic assessments of AI ethics and safety.