Unveiling Hidden Threats: Using Fractal Triggers to Boost Stealthiness of Distributed Backdoor Attacks in Federated Learning

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional distributed backdoor attacks (DBAs) in federated learning rely heavily on large volumes of poisoned data, rendering them susceptible to detection. To address this, this paper proposes a novel fractal-structured DBA framework. First, it introduces fractal self-similarity into trigger design, generating sub-trigger patterns with enhanced feature representation and significantly improving per-sample poisoning efficacy. Second, it incorporates a dynamic angular perturbation mechanism that jointly suppresses detectability in both the frequency and gradient domains. Experimental results demonstrate that the proposed method achieves a 92.3% attack success rate using only 62.4% of the poisoning samples required by conventional approaches, while reducing detection probability by 22.8% and KL divergence by 41.2%. This framework thus unifies low poisoning overhead, high attack effectiveness, and strong stealthiness.

📝 Abstract
Traditional distributed backdoor attacks (DBA) in federated learning improve stealthiness by decomposing global triggers into sub-triggers, which however requires more poisoned data to maintain the attack strength and hence increases the exposure risk. To overcome this defect, this paper proposes a novel method, namely Fractal-Triggered Distributed Backdoor Attack (FTDBA), which leverages the self-similarity of fractals to enhance the feature strength of sub-triggers and hence significantly reduce the poisoning volume required for the same attack strength. To address the detectability of fractal structures in the frequency and gradient domains, we introduce a dynamic angular perturbation mechanism that adaptively adjusts perturbation intensity across the training phases to balance efficiency and stealthiness. Experiments show that FTDBA achieves a 92.3% attack success rate with only 62.4% of the poisoning volume required by traditional DBA methods, while reducing the detection rate by 22.8% and KL divergence by 41.2%. This study presents a low-exposure, high-efficiency paradigm for federated backdoor attacks and expands the application of fractal features in adversarial sample generation.
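The paper's exact trigger construction is not given here, but the self-similar structure it exploits can be illustrated with a classic fractal. The sketch below builds a Sierpinski-carpet mask and splits it into row-strip sub-triggers, one per malicious client; both the carpet choice and the strip decomposition are hypothetical stand-ins for the paper's actual design.

```python
import numpy as np

def sierpinski_carpet(order: int) -> np.ndarray:
    """Build a binary Sierpinski-carpet mask of size 3^order x 3^order.

    Each recursion tiles the current mask 3x3 and blanks the centre
    cell, producing the self-similar structure that gives each
    sub-region the same feature pattern as the whole trigger.
    """
    mask = np.ones((1, 1), dtype=np.uint8)
    for _ in range(order):
        n = mask.shape[0]
        tiled = np.tile(mask, (3, 3))
        tiled[n:2 * n, n:2 * n] = 0  # remove the centre block
        mask = tiled
    return mask

def split_into_subtriggers(mask: np.ndarray, parts: int = 4):
    """Split the global trigger into row strips, one sub-trigger per
    malicious client (a hypothetical decomposition for illustration)."""
    return np.array_split(mask, parts, axis=0)

carpet = sierpinski_carpet(2)          # 9x9 binary mask
subs = split_into_subtriggers(carpet)  # 4 strips for 4 clients
print(carpet.shape, len(subs), int(carpet.sum()))
```

Because every sub-region of the carpet repeats the global pattern, each strip retains a recognisable fragment of the trigger, which is the intuition behind stronger per-sample poisoning efficacy.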
Problem

Research questions and friction points this paper is trying to address.

Enhancing stealthiness of distributed backdoor attacks in federated learning
Reducing poisoning data volume while maintaining attack strength
Mitigating detectability of fractal triggers through adaptive perturbation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using fractal triggers to enhance sub-trigger feature strength
Applying dynamic angular perturbation for stealthiness
Reducing poisoning volume while maintaining attack success
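The dynamic angular perturbation mechanism listed above is only described at a high level, so the following is a minimal sketch under the assumption that perturbation intensity decays exponentially over training rounds, with per-round random jitter; the schedule shape, `theta_max`, and the decay constant are all illustrative choices, not the paper's parameters.

```python
import math
import random

def perturbation_angle(round_t: int, total_rounds: int,
                       theta_max: float = 15.0, seed: int = 0) -> float:
    """Hypothetical schedule: strong angular jitter early in training
    (masking the trigger's frequency/gradient signature) that decays
    as the backdoor consolidates, trading stealth against efficiency."""
    rng = random.Random(seed + round_t)        # reproducible per-round noise
    decay = math.exp(-3.0 * round_t / total_rounds)
    jitter = rng.uniform(-1.0, 1.0)
    return theta_max * decay * jitter

# Sample the schedule over 100 federated rounds.
angles = [perturbation_angle(t, 100) for t in range(100)]
print(round(max(abs(a) for a in angles), 2))
```

The resulting angle would then be applied as a small rotation of the sub-trigger before pasting it into a poisoned sample; the key property is only that the perturbation envelope shrinks as training progresses.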
Jian Wang
Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR, China
Hong Shen
School of Engineering and Technology, Central Queensland University, Australia
Chan-Tong Lam
Macao Polytechnic University
Intelligent Communications, Image and Signal Processing, AI in Communications