Hide and Find: A Distributed Adversarial Attack on Federated Graph Learning

📅 2026-03-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes FedShift, a novel two-stage "Hide and Find" distributed adversarial attack against federated graph learning. Unlike existing methods, which suffer from low success rates, high computational overhead, and susceptibility to detection by defense mechanisms, FedShift first injects learnable, stealthy shifters before training to subtly bias poisoned graph representations toward, but not beyond, the decision boundary of the target class. After the global model converges, it leverages global model information to efficiently generate adversarial perturbations starting from these shifters, then aggregates perturbations from multiple malicious clients to launch a coordinated attack. FedShift achieves strong stealth, efficiency, and evasion capability, significantly outperforming state-of-the-art attacks across six large-scale datasets: it substantially increases attack success rates, reduces computational overhead by over 90%, and successfully bypasses three mainstream robust federated learning defenses.
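The stage-one idea described above can be illustrated with a deliberately minimal sketch: given a classifier, learn a small additive "shifter" that moves a poisoned representation toward the target class's decision boundary while keeping a small positive margin, so the sample is still classified correctly during training. The linear two-class model, the margin threshold `epsilon`, and all constants below are illustrative assumptions, not the paper's actual GNN setup.

```python
import numpy as np

# Hypothetical stand-in for a graph encoder + classifier: a 2-class
# linear model W, so logits = W @ x and the decision boundary between
# the true and target class is where their logits are equal.
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])
x = np.array([2.0, 0.5])        # clean graph representation (illustrative)
y_true, y_target = 0, 1         # source class and attack target class

def margin(v):
    """Logit margin of the true class over the target class (>0 = correct)."""
    logits = W @ v
    return logits[y_true] - logits[y_target]

# For a linear model the margin's gradient w.r.t. the input is constant.
grad = W[y_true] - W[y_target]

# Solve for the additive shifter that lands exactly at margin = epsilon:
# a small positive margin keeps the poisoned sample on the correct side
# of the boundary ("toward but not beyond"), which is what makes stage
# one stealthy during training.
epsilon = 0.1
shifter = -((margin(x) - epsilon) / (grad @ grad)) * grad
x_poisoned = x + shifter

assert margin(x) > 0                          # clean sample: correct side
assert 0 < margin(x_poisoned) <= margin(x)    # shifted toward, not past
```

In the paper's setting the shifter is learned jointly with local training rather than solved in closed form; the closed-form step here is only possible because the toy model is linear.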

📝 Abstract
Federated Graph Learning (FedGL) is vulnerable to malicious attacks, yet developing a truly effective and stealthy attack method remains a significant challenge. Existing attack methods suffer from low attack success rates and high computational costs, and are easily identified and smoothed by defense algorithms. To address these challenges, we propose FedShift, a novel two-stage "Hide and Find" distributed adversarial attack. In the first stage, before FedGL begins, we inject a learnable and hidden "shifter" into part of the training data, which subtly pushes poisoned graph representations toward a target class's decision boundary without crossing it, ensuring attack stealthiness during training. In the second stage, after FedGL is complete, we leverage the global model information and use the hidden shifter as an optimization starting point to efficiently find the adversarial perturbations. During the final attack, we aggregate these perturbations from multiple malicious clients to form the final effective adversarial sample and trigger the attack. Extensive experiments on six large-scale datasets demonstrate that our method achieves the highest attack effectiveness compared to existing advanced attack methods. In particular, our attack can effectively evade 3 mainstream robust federated learning defense algorithms and converges with a time cost reduction of over 90%, highlighting its exceptional stealthiness, robustness, and efficiency.
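Stage two, as the abstract describes it, starts each malicious client's perturbation search from the shifter's endpoint (already near the boundary, hence the ">90%" time savings) and aggregates the clients' perturbations into one adversarial sample. The sketch below is a hedged toy version under the same illustrative linear "global model" as before: each client takes a few margin-descent steps, and the attacker averages the resulting perturbations. The per-client step counts, learning rate, and aggregation rule (a plain mean) are assumptions for illustration.

```python
import numpy as np

# Illustrative converged "global model" (2-class linear classifier) and a
# poisoned representation sitting just inside the correct side of the
# boundary, i.e. the endpoint produced by the stage-one shifter.
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])
x = np.array([1.3, 1.2])
y_true, y_target = 0, 1

def margin(v):
    """Logit margin of the true class over the target class (>0 = correct)."""
    logits = W @ v
    return logits[y_true] - logits[y_target]

# Direction that increases the margin; clients descend it to cross over.
grad = W[y_true] - W[y_target]

# Each malicious client refines a local perturbation starting from zero
# (the shifter already did the long-range travel); varying step counts
# stand in for heterogeneous local computation across clients.
lr = 0.05
perturbations = []
for steps in (3, 4, 5):                 # three malicious clients (assumed)
    delta = np.zeros_like(x)
    for _ in range(steps):
        delta -= lr * grad              # gradient descent on the margin
    perturbations.append(delta)

delta_final = np.mean(perturbations, axis=0)   # aggregate across clients
x_adv = x + delta_final

assert margin(x) > 0       # before the attack: still classified correctly
assert margin(x_adv) < 0   # aggregated perturbation flips the prediction
```

Because optimization starts from a point already near the boundary, only a handful of small steps are needed to cross it, which is the intuition behind the reported efficiency gain; the real attack operates on graph structure/features through a GNN rather than on a fixed feature vector.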
Problem

Research questions and friction points this paper is trying to address.

Federated Graph Learning
Adversarial Attack
Attack Stealthiness
Distributed Attack
Defense Evasion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Graph Learning
Adversarial Attack
Stealthy Poisoning
Distributed Attack
Decision Boundary Manipulation
Jinshan Liu
Ph.D. Virginia Tech
Security · Autonomous Driving · IoT · Deep Learning
Ken Li
School of Computer Science and Technology, Xi’an Jiaotong University; Ministry of Education Key Laboratory of Intelligent Networks and Network Security
Jiazhe Wei
School of Computer Science and Technology, Xi’an Jiaotong University; Shaanxi Province Key Laboratory of Big Data Knowledge Engineering
Bin Shi
Xi'an Jiaotong University
Virtualization · Data Mining
Bo Dong
Xi'an Jiaotong University
Cloud Computing · e-Learning · Big Data