Beyond Training-time Poisoning: Component-level and Post-training Backdoors in Deep Reinforcement Learning

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing DRL backdoor research primarily focuses on training-time attacks, which require high system privileges and are thus inapplicable to safety-critical settings. This work uncovers exploitable vulnerabilities at both the component level and the post-training stage of the DRL supply chain. We propose two novel backdoor attacks: TrojanentRL—a persistent, retraining-resistant backdoor leveraging weight manipulation and observation-space triggers—and InfrectroRL—the first data-free, purely post-training backdoor injection method. Both employ adversarial fine-tuning and behavioral hijacking to achieve end-to-end attacks across six Atari environments, matching state-of-the-art training-time attack performance while evading two major classes of defenses. To our knowledge, this is the first systematic extension of the DRL backdoor threat model to post-training and component-level attack surfaces. Our work establishes a more realistic threat-analysis framework for safety-critical applications and opens new avenues for robust defense design.

📝 Abstract
Deep Reinforcement Learning (DRL) systems are increasingly used in safety-critical applications, yet their security remains severely underexplored. This work investigates backdoor attacks, which implant hidden triggers that cause malicious actions only when specific inputs appear in the observation space. Existing DRL backdoor research focuses solely on training-time attacks requiring unrealistic access to the training pipeline. In contrast, we reveal critical vulnerabilities across the DRL supply chain where backdoors can be embedded with significantly reduced adversarial privileges. We introduce two novel attacks: (1) TrojanentRL, which exploits component-level flaws to implant a persistent backdoor that survives full model retraining; and (2) InfrectroRL, a post-training backdoor attack that requires no access to training, validation, or test data. Empirical and analytical evaluations across six Atari environments show our attacks rival state-of-the-art training-time backdoor attacks while operating under much stricter adversarial constraints. We also demonstrate that InfrectroRL further evades two leading DRL backdoor defenses. These findings challenge the current research focus and highlight the urgent need for robust defenses.
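The observation-space trigger mechanism described above can be illustrated with a minimal, hypothetical sketch (this is not the paper's implementation; the patch location, trigger pattern, and toy policy are illustrative assumptions): a small pixel patch stamped into an Atari-style frame makes a backdoored policy emit an attacker-chosen action only when the patch is present.

```python
import numpy as np

# Hypothetical trigger: a 3x3 white patch in the top-left corner of the frame.
TRIGGER = np.full((3, 3), 255, dtype=np.uint8)

def stamp_trigger(obs: np.ndarray) -> np.ndarray:
    """Return a copy of the observation with the trigger patch stamped in."""
    poisoned = obs.copy()
    poisoned[:3, :3] = TRIGGER
    return poisoned

def backdoored_policy(obs: np.ndarray) -> int:
    """Toy policy: behaves normally unless the trigger patch is present."""
    if np.array_equal(obs[:3, :3], TRIGGER):
        return 0  # attacker-chosen malicious action (e.g., NOOP)
    return int(obs.mean() > 127)  # stand-in for the benign policy's decision

# Atari-style 84x84 grayscale frame (bright, so the benign action is 1).
obs = np.full((84, 84), 200, dtype=np.uint8)
clean_action = backdoored_policy(obs)              # benign behavior
triggered_action = backdoored_policy(stamp_trigger(obs))  # hijacked behavior
```

On clean observations the policy acts normally; only the presence of the trigger patch flips its output, which is what makes such backdoors hard to detect from benign-input behavior alone.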
Problem

Research questions and friction points this paper is trying to address.

Investigates backdoor attacks in DRL systems beyond training-time poisoning
Reveals vulnerabilities in DRL supply chain with reduced adversarial privileges
Introduces novel attacks evading existing DRL backdoor defenses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exploits component-level flaws for persistent backdoors
Post-training attack without needing training data
Evades leading DRL backdoor defenses effectively