DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning

📅 2024-11-19
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the dual threats of backdoor attacks and data privacy leakage in federated learning, this paper proposes DeTrigger, a lightweight and robust defense framework that combines gradient analysis with temperature scaling for trigger detection, malicious-activation isolation, and adaptive weight pruning. Its gradient-centric trigger-identification approach operates without access to global data or model fine-tuning, suppressing attacks while preserving benign knowledge, and draws on insights from adversarial attack methodologies to sharpen its discriminative capability. Evaluated on four benchmark datasets, DeTrigger mitigates backdoor attacks by up to 98.9%, detects them up to 251× faster than prior approaches, and incurs less than 0.5% degradation in global model accuracy, significantly outperforming state-of-the-art defenses.

📝 Abstract
Federated Learning (FL) enables collaborative model training across distributed devices while preserving local data privacy, making it ideal for mobile and embedded systems. However, the decentralized nature of FL also opens vulnerabilities to model poisoning attacks, particularly backdoor attacks, where adversaries implant trigger patterns to manipulate model predictions. In this paper, we propose DeTrigger, a scalable and efficient backdoor-robust federated learning framework that leverages insights from adversarial attack methodologies. By employing gradient analysis with temperature scaling, DeTrigger detects and isolates backdoor triggers, allowing for precise model weight pruning of backdoor activations without sacrificing benign model knowledge. Extensive evaluations across four widely used datasets demonstrate that DeTrigger achieves up to 251x faster detection than traditional methods and mitigates backdoor attacks by up to 98.9%, with minimal impact on global model accuracy. Our findings establish DeTrigger as a robust and scalable solution to protect federated learning environments against sophisticated backdoor threats.
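The abstract's core mechanism, inspecting temperature-scaled gradients to expose trigger-like patterns before pruning, can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not the paper's actual DeTrigger algorithm: the linear model, the temperature value, and the `gradient_concentration` statistic are all illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; larger T flattens the distribution."""
    s = (z - z.max()) / T  # subtract max for numerical stability
    e = np.exp(s)
    return e / e.sum()

def input_gradient(x, W, label, T=2.0):
    """Gradient of temperature-scaled cross-entropy w.r.t. the input,
    for a toy linear model (logits = W @ x): dL/dx = W.T @ (p - onehot) / T."""
    p = softmax(W @ x, T)
    onehot = np.zeros_like(p)
    onehot[label] = 1.0
    return W.T @ (p - onehot) / T

def gradient_concentration(g, k=5):
    """Fraction of total gradient magnitude carried by the k largest entries --
    a hypothetical detection statistic: in deep models, localized backdoor
    triggers are often reported to produce concentrated input gradients."""
    mag = np.abs(g)
    return np.sort(mag)[-k:].sum() / (mag.sum() + 1e-12)

# Toy usage: a 10-class linear model on 64-dimensional inputs.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 64))
x = rng.normal(size=64)
score = gradient_concentration(input_gradient(x, W, label=3))
print(f"gradient concentration: {score:.3f}")
```

Dividing the logits by a temperature T > 1 softens the softmax so that gradient signal from non-maximal classes is not crushed to near zero, which is why adversarial-style gradient analyses often apply it before thresholding a statistic like the one sketched here.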
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Backdoor Attacks
Security Threats
Innovation

Methods, ideas, or system contributions that make the work stand out.

DeTrigger
Adversarial Attacks
Backdoor Defense in Federated Learning