Exploiting Layer-Specific Vulnerabilities to Backdoor Attack in Federated Learning

📅 2026-02-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work proposes the Layer Smoothing Attack (LSA), a novel backdoor attack that exploits the decentralized nature of federated learning, a property that leaves it vulnerable to sophisticated threats capable of evading existing defenses. LSA reveals, for the first time, the existence of “backdoor-critical layers” within neural networks—specific model layers highly susceptible to backdoor injection. By leveraging Layer Substitution Analysis, the method precisely identifies these critical layers and executes targeted parameter manipulation to implant backdoors efficiently. The approach is model- and dataset-agnostic, achieving a backdoor success rate of up to 97% while preserving high main-task accuracy. Notably, LSA effectively bypasses state-of-the-art defense mechanisms in federated learning, highlighting a critical vulnerability in current security assumptions.

📝 Abstract
Federated learning (FL) enables distributed model training across edge devices while preserving data locality. This decentralized approach has emerged as a promising solution for collaborative learning on sensitive user data, effectively addressing the longstanding privacy concerns inherent in centralized systems. However, the decentralized nature of FL exposes new security vulnerabilities, especially backdoor attacks that threaten model integrity. To investigate this critical concern, this paper presents the Layer Smoothing Attack (LSA), a novel backdoor attack that exploits layer-specific vulnerabilities in neural networks. First, a Layer Substitution Analysis methodology systematically identifies backdoor-critical (BC) layers that contribute most significantly to backdoor success. Subsequently, LSA strategically manipulates these BC layers to inject persistent backdoors while remaining undetected by state-of-the-art defense mechanisms. Extensive experiments across diverse model architectures and datasets demonstrate that LSA achieves a remarkable backdoor success rate of up to 97% while maintaining high model accuracy on the primary task, consistently bypassing modern FL defenses. These findings uncover fundamental vulnerabilities in current FL security frameworks, demonstrating that future defenses must incorporate layer-aware detection and mitigation strategies.
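The layer-ranking step the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: models are toy dicts mapping layer names to parameter blobs, and `toy_bsr` stands in for evaluating backdoor success rate on a triggered test set; in practice the dicts would be neural-network state dicts and the evaluation would run real inference.

```python
# Hypothetical sketch of Layer Substitution Analysis: for each layer, swap the
# poisoned model's parameters into the benign model and measure how much the
# backdoor success rate (BSR) rises. All names here are illustrative.

def layer_substitution_analysis(benign, poisoned, backdoor_success_rate):
    """Rank layers by the BSR gain obtained when substituting that single
    layer of the benign model with its poisoned counterpart."""
    base = backdoor_success_rate(benign)
    gains = {}
    for name in benign:
        hybrid = dict(benign)            # copy, then swap exactly one layer
        hybrid[name] = poisoned[name]
        gains[name] = backdoor_success_rate(hybrid) - base
    # Layers with the largest gains are the backdoor-critical (BC) layers.
    return sorted(gains, key=gains.get, reverse=True)

# Toy stand-in: pretend the backdoor lives mostly in "fc" and partly in "conv2".
def toy_bsr(model):
    return 0.9 * model["fc"] + 0.1 * model["conv2"]

benign   = {"conv1": 0.0, "conv2": 0.0, "fc": 0.0}
poisoned = {"conv1": 1.0, "conv2": 1.0, "fc": 1.0}

ranking = layer_substitution_analysis(benign, poisoned, toy_bsr)
print(ranking)  # "fc" ranks first: it is the most backdoor-critical layer
```

An attacker following the paper's recipe would then concentrate parameter manipulation on the top-ranked layers, leaving the rest untouched to preserve main-task accuracy.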
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Backdoor Attack
Layer-Specific Vulnerabilities
Model Integrity
Security
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layer Smoothing Attack
Backdoor-Critical Layers
Layer Substitution Analysis
Federated Learning Security
Stealthy Backdoor Injection
Mohammad Hadi Foroughi
School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran; School of Computer Science, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
Seyed Hamed Rastegar
School of Electrical and Computer Engineering, Shiraz University, Shiraz, Iran
Mohammad Sabokrou
Okinawa Institute of Science and Technology
Machine Learning · Computer Vision · Trustworthy AI
Ahmad Khonsari
Associate Professor, ECE Department, The University of Tehran
Performance modelling and evaluation · Computer networks (wireless and wired)