🤖 AI Summary
This work proposes Echo Chamber, a novel jailbreaking attack that targets the vulnerability of large language models (LLMs) in multi-turn dialogues. Echo Chamber uses a progressive escalation mechanism, combining contextual manipulation across multiple turns, adversarial prompt engineering, and systematic interaction strategies to gradually steer the model into bypassing its safety constraints. Experimental results show that Echo Chamber substantially increases jailbreaking success rates across several mainstream LLMs while remaining difficult to detect, exposing critical weaknesses in current safety mechanisms in extended interactive scenarios.
📝 Abstract
The availability of Large Language Models (LLMs) has led to a new generation of powerful chatbots that can be developed at relatively low cost. As companies deploy these tools, security challenges must be addressed to prevent financial loss and reputational damage. A key security challenge is jailbreaking: the malicious manipulation of prompts and inputs to bypass a chatbot's safety guardrails. Multi-turn attacks are a relatively new form of jailbreaking that relies on a carefully crafted chain of interactions with a chatbot. We introduce Echo Chamber, a new multi-turn attack that uses a gradual escalation method. We describe the attack in detail, compare it to other multi-turn attacks, and demonstrate its performance against multiple state-of-the-art models through extensive evaluation.