π€ AI Summary
Existing jailbreak attacks predominantly rely on single-turn, explicit malicious queries, failing to capture the stealthy security risks arising from usersβ concealed intentions in realistic multi-turn dialogues. This paper proposes RED QUEEN ATTACK, the first multi-turn jailbreak framework that masquerades as βharm-prevention behavior,β pioneering a covert paradigm wherein malicious objectives are embedded within benign safety discussions. It further reveals, for the first time, the counterintuitive phenomenon that large language models become *more* vulnerable under multi-turn cover. Complementing this, we introduce RED QUEEN GUARDβa lightweight defense leveraging instruction-tuning alignment. Evaluated on GPT-4o and Llama3-70B, it reduces attack success rates from 87.62% and 75.4% to below 1%, respectively, without degrading performance on standard benchmarks.
π Abstract
The rapid progress of Large Language Models (LLMs) has opened up new opportunities across various domains and applications; yet it also presents challenges related to potential misuse. To mitigate such risks, red teaming has been employed as a proactive security measure to probe language models for harmful outputs via jailbreak attacks. However, current jailbreak attack approaches are single-turn with explicit malicious queries that do not fully capture the complexity of real-world interactions. In reality, users can engage in multi-turn interactions with LLM-based chat assistants, allowing them to conceal their true intentions in a more covert manner. To bridge this gap, we, first, propose a new jailbreak approach, RED QUEEN ATTACK. This method constructs a multi-turn scenario, concealing the malicious intent under the guise of preventing harm. We craft 40 scenarios that vary in turns and select 14 harmful categories to generate 56k multi-turn attack data points. We conduct comprehensive experiments on the RED QUEEN ATTACK with four representative LLM families of different sizes. Our experiments reveal that all LLMs are vulnerable to RED QUEEN ATTACK, reaching 87.62% attack success rate on GPT-4o and 75.4% on Llama3-70B. Further analysis reveals that larger models are more susceptible to the RED QUEEN ATTACK, with multi-turn structures and concealment strategies contributing to its success. To prioritize safety, we introduce a straightforward mitigation strategy called RED QUEEN GUARD, which aligns LLMs to effectively counter adversarial attacks. This approach reduces the attack success rate to below 1% while maintaining the model's performance across standard benchmarks. Full implementation and dataset are publicly accessible at https://github.com/kriti-hippo/red_queen.