🤖 AI Summary
This work proposes the Layer Smoothing Attack (LSA), a novel backdoor attack that exploits the decentralized nature of federated learning, which leaves it vulnerable to sophisticated threats capable of evading existing defenses. LSA reveals, for the first time, the existence of “backdoor-critical layers” within neural networks—specific model layers highly susceptible to backdoor injection. Using Layer Substitution Analysis, the method precisely identifies these critical layers and executes targeted parameter manipulation to implant backdoors efficiently. The approach is model- and dataset-agnostic, achieving a backdoor success rate of up to 97% while preserving high main-task accuracy. Notably, LSA bypasses state-of-the-art defense mechanisms in federated learning, exposing a critical gap in current security assumptions.
📝 Abstract
Federated learning (FL) enables distributed model training across edge devices while preserving data locality. This decentralized approach has emerged as a promising solution for collaborative learning on sensitive user data, effectively addressing the longstanding privacy concerns inherent in centralized systems. However, the decentralized nature of FL exposes new security vulnerabilities, particularly backdoor attacks that threaten model integrity. To investigate this critical concern, this paper presents the Layer Smoothing Attack (LSA), a novel backdoor attack that exploits layer-specific vulnerabilities in neural networks. First, a Layer Substitution Analysis methodology systematically identifies backdoor-critical (BC) layers that contribute most significantly to backdoor success. Subsequently, LSA strategically manipulates these BC layers to inject persistent backdoors while remaining undetected by state-of-the-art defense mechanisms. Extensive experiments across diverse model architectures and datasets demonstrate that LSA achieves a remarkably high backdoor success rate of up to 97% while maintaining high model accuracy on the primary task, consistently bypassing modern FL defenses. These findings uncover fundamental vulnerabilities in current FL security frameworks, demonstrating that future defenses must incorporate layer-aware detection and mitigation strategies.
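The core idea of Layer Substitution Analysis, as described above, is to swap individual layers from a backdoored model into a benign model and measure how much each single-layer substitution recovers the backdoor success rate (BSR). The following is a minimal sketch of that procedure, not the paper's actual implementation: model parameters are represented as plain name-to-array dictionaries, and `eval_backdoor` (a stand-in for evaluating trigger-input accuracy on the attacker's target label) and the `threshold` fraction are hypothetical placeholders.

```python
import numpy as np

def layer_substitution_analysis(benign, backdoored, eval_backdoor, threshold=0.8):
    """Rank layers by how much substituting a single backdoored layer's
    parameters into the benign model raises the backdoor success rate (BSR).

    benign / backdoored: dicts mapping layer name -> parameter array.
    eval_backdoor: callable returning the BSR of a parameter dict
                   (hypothetical; in practice, accuracy on triggered inputs).
    threshold: fraction of the full backdoored model's BSR a single-layer
               swap must recover for the layer to count as backdoor-critical.
    """
    full_bsr = eval_backdoor(backdoored)  # BSR of the fully backdoored model
    scores = {}
    for name in benign:
        hybrid = dict(benign)              # shallow copy of benign parameters
        hybrid[name] = backdoored[name]    # substitute one malicious layer
        scores[name] = eval_backdoor(hybrid)
    # Backdoor-critical (BC) layers: single-layer swaps that recover
    # most of the full model's backdoor success rate.
    bc_layers = [n for n, s in scores.items() if s >= threshold * full_bsr]
    return scores, bc_layers

# Toy usage: a two-"layer" model where only the final layer carries the backdoor.
benign = {"conv1": np.zeros(4), "fc": np.zeros(4)}
backdoored = {"conv1": np.zeros(4), "fc": np.ones(4)}

def toy_bsr(params):
    # Hypothetical evaluator: the backdoor fires only via the "fc" parameters.
    return 0.97 if params["fc"].sum() > 0 else 0.01

scores, bc_layers = layer_substitution_analysis(benign, backdoored, toy_bsr)
print(bc_layers)  # → ['fc']
```

Under this sketch, the attacker would then confine parameter manipulation to the identified BC layers, which is what allows the attack to stay small enough to slip past aggregation-level defenses.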