🤖 AI Summary
This work addresses the challenge of detecting stealthy backdoors in large language models—such as those induced by sleeper agent attacks—where both the trigger and the target behavior are unknown. The authors propose a black-box detection method that requires no prior knowledge of the attack. By combining memory extraction with analysis of the model's attention mechanisms, the approach operates solely on inference behavior: it recovers poisoned samples, analyzes output distributions, and examines attention head patterns to identify and reconstruct unknown backdoor triggers. Experimental results demonstrate that the method recovers high-confidence triggers across diverse backdoor types, model architectures, and fine-tuning strategies, supporting its robustness, generality, and practical utility.
📝 Abstract
Detecting whether a model has been poisoned is a longstanding problem in AI security. In this work, we present a practical scanner for identifying sleeper-agent-style backdoors in causal language models. Our approach relies on two key findings. First, sleeper agents tend to memorize poisoning data, making it possible to leak backdoor examples using memory-extraction techniques. Second, poisoned LLMs exhibit distinctive patterns in their output distributions and attention heads when backdoor triggers are present in the input. Guided by these observations, we develop a scalable backdoor-scanning methodology that assumes no prior knowledge of the trigger or target behavior and requires only inference operations. Our scanner integrates naturally into broader defensive strategies and does not alter model performance. We show that our method recovers working triggers across multiple backdoor scenarios and a broad range of models and fine-tuning methods.
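The abstract's second observation—that a poisoned model's output distribution shifts sharply when the trigger appears—suggests a simple scoring loop over candidate strings. The following is a minimal toy sketch of that idea, not the authors' actual method: the `toy_model_logits` stand-in model, the `"|DEPLOYMENT|"` trigger string, and the KL-based score are all illustrative assumptions.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    # KL divergence D(p || q) between two discrete distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def score_candidate(model_logits, prompt, candidate):
    # Score a candidate trigger by how much it shifts the model's
    # next-token distribution relative to the clean prompt; a large
    # shift is a (weak) signal of backdoor activation.
    p_clean = softmax(model_logits(prompt))
    p_trig = softmax(model_logits(prompt + " " + candidate))
    return kl(p_trig, p_clean)

def toy_model_logits(text):
    # Hypothetical stand-in for a poisoned LM: near-uniform logits over
    # 4 tokens normally, but mass collapses onto one token when the
    # (invented) trigger "|DEPLOYMENT|" is present.
    if "|DEPLOYMENT|" in text:
        return [8.0, 0.0, 0.0, 0.0]
    return [1.0, 1.1, 0.9, 1.0]

candidates = ["hello", "|DEPLOYMENT|", "2024"]
scores = {c: score_candidate(toy_model_logits, "The year is", c)
          for c in candidates}
flagged = max(scores, key=scores.get)
```

A real scanner would rank candidates recovered via memory extraction and combine this distributional score with attention-head statistics; the toy model here exists only to make the scoring loop concrete.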