The Trigger in the Haystack: Extracting and Reconstructing LLM Backdoor Triggers

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of detecting stealthy backdoors in large language models—such as those induced by sleeper agent attacks—where both the trigger and target behavior are unknown. The authors propose a black-box detection method that requires no prior knowledge of the attack. By uniquely integrating memory extraction with attention mechanism analysis, the approach operates solely on the model’s inference behavior to recover poisoned samples, analyze output distributions, and examine attention head patterns, thereby identifying and reconstructing unknown backdoor triggers. Experimental results demonstrate that the method effectively recovers high-confidence triggers across diverse backdoor types, model architectures, and fine-tuning strategies, confirming its robustness, generality, and practical utility.

📝 Abstract
Detecting whether a model has been poisoned is a longstanding problem in AI security. In this work, we present a practical scanner for identifying sleeper agent-style backdoors in causal language models. Our approach relies on two key findings: first, sleeper agents tend to memorize poisoning data, making it possible to leak backdoor examples using memory extraction techniques. Second, poisoned LLMs exhibit distinctive patterns in their output distributions and attention heads when backdoor triggers are present in the input. Guided by these observations, we develop a scalable backdoor scanning methodology that assumes no prior knowledge of the trigger or target behavior and requires only inference operations. Our scanner integrates naturally into broader defensive strategies and does not alter model performance. We show that our method recovers working triggers across multiple backdoor scenarios and a broad range of models and fine-tuning methods.
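The abstract notes that poisoned LLMs exhibit distinctive output-distribution patterns when a backdoor trigger is present. As a minimal self-contained sketch of that one signal (not the paper's full pipeline, which also uses memory extraction and attention-head analysis), the function below flags candidate strings whose insertion causes an anomalous collapse in next-token entropy. The `next_token_dist` scoring function and the `z_thresh` cutoff are illustrative assumptions, not details from the paper.

```python
import math

def entropy(probs):
    """Shannon entropy of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def flag_triggers(next_token_dist, prompts, candidates, z_thresh=3.0):
    """Flag candidate strings whose insertion causes an anomalously
    large drop in next-token entropy relative to clean prompts.

    A collapsed, over-confident output distribution is one possible
    signal that a backdoor trigger has activated target behavior.

    next_token_dist(prompt) -> list of next-token probabilities.
    """
    # Baseline: entropy statistics over clean prompts.
    base = [entropy(next_token_dist(p)) for p in prompts]
    mu = sum(base) / len(base)
    sigma = (sum((b - mu) ** 2 for b in base) / len(base)) ** 0.5 or 1e-9

    flagged = []
    for cand in candidates:
        # Average entropy when the candidate is appended to each prompt.
        scores = [entropy(next_token_dist(p + " " + cand)) for p in prompts]
        z = (sum(scores) / len(scores) - mu) / sigma
        if z < -z_thresh:  # large entropy drop => suspicious candidate
            flagged.append((cand, z))
    return flagged
```

With a toy model whose distribution collapses only on a hypothetical trigger token like `|DEPLOY|`, the scanner flags that candidate and leaves benign strings alone; a real deployment would score candidates against an actual causal LM's logits.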
Problem

Research questions and friction points this paper is trying to address.

backdoor detection
LLM security
trigger extraction
sleeper agents
poisoned models
Innovation

Methods, ideas, or system contributions that make the work stand out.

backdoor detection
memory extraction
trigger reconstruction
sleeper agents
LLM security
Blake Bullwinkel
Microsoft
machine learning, artificial intelligence
Giorgio Severi
Microsoft
Computer Security, Adversarial Machine Learning, AI Safety
Keegan Hines
Microsoft, Redmond, Washington, USA
Amanda Minnich
Microsoft, Redmond, Washington, USA
Ram Shankar Siva Kumar
Microsoft
Machine Learning, Cloud Security, Adversarial Learning, Law
Yonatan Zunger
Microsoft, Redmond, Washington, USA