Backdoor Attack on Vertical Federated Graph Neural Network Learning

📅 2024-10-15
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work reveals a previously overlooked stealthy backdoor vulnerability in Vertical Federated Graph Neural Networks (VFGNNs) under label-unavailable settings. To exploit it, the authors propose BVG, the first label-free backdoor attack tailored to VFGNNs, which leverages graph structure perturbation and a multi-hop neighborhood triggering mechanism to achieve high stealthiness using only four target-class nodes. They further introduce a novel model-parameter-level backdoor preservation strategy that retains the backdoor through federated aggregation. BVG is compatible with mainstream GNN architectures and vertical federated protocols. Extensive evaluations across three benchmark datasets and three GNN models demonstrate near-100% attack success rates, less than 1% degradation in main-task accuracy, and strong resilience against diverse defense mechanisms, calling prevailing VFGNN security assumptions into question.

📝 Abstract
Federated Graph Neural Networks (FedGNNs) integrate federated learning (FL) with graph neural networks (GNNs) to enable privacy-preserving training on distributed graph data. Vertical Federated Graph Neural Network (VFGNN), a key branch of FedGNN, handles scenarios where data features and labels are distributed among participants. Despite the robust privacy-preserving design of VFGNN, we find that it still faces the risk of backdoor attacks, even when labels are inaccessible. This paper proposes BVG, a novel backdoor attack method that leverages multi-hop triggers and backdoor retention, requiring only four target-class nodes to execute effective attacks. Experimental results demonstrate that BVG achieves nearly 100% attack success rates across three commonly used datasets and three GNN models, with minimal impact on main-task accuracy. We also evaluate various defense methods and find that BVG maintains high attack effectiveness even under existing defenses. This finding highlights the need for advanced defense mechanisms to counter sophisticated backdoor attacks in practical VFGNN applications.
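To make the "multi-hop trigger" idea concrete, here is a minimal illustrative sketch, not the paper's actual BVG algorithm: given a handful of attacker-chosen nodes, it stamps a fixed feature pattern onto those nodes and every node within a few hops of them. The function name, the dense-adjacency representation, and the specific trigger encoding are all assumptions made for illustration.

```python
import numpy as np

def inject_multihop_trigger(adj, features, trigger_nodes, trigger_feat, hops=2):
    """Illustrative sketch (not the paper's algorithm).

    adj           : (n, n) dense 0/1 adjacency matrix
    trigger_nodes : indices of the few attacker-controlled nodes
    trigger_feat  : fixed pattern written into the leading feature dims
    hops          : how far the trigger spreads through the neighborhood
    """
    features = features.copy()
    # Boolean mask of all nodes reachable within `hops` of the trigger nodes.
    reach = np.zeros(adj.shape[0], dtype=bool)
    reach[trigger_nodes] = True
    frontier = reach.copy()
    for _ in range(hops):
        # Neighbors of the current frontier that are not yet marked.
        frontier = (adj[frontier].sum(axis=0) > 0) & ~reach
        reach |= frontier
    # Overwrite the first len(trigger_feat) feature dims on poisoned nodes.
    features[reach, : len(trigger_feat)] = trigger_feat
    return features, reach
```

On a 5-node path graph 0–1–2–3–4 with trigger node 0 and `hops=2`, the trigger reaches nodes {0, 1, 2} and leaves nodes 3 and 4 untouched, which illustrates why such neighborhood-level perturbations can stay localized and hard to spot.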
Problem

Research questions and friction points this paper is trying to address.

Vertical Federated Graph Neural Networks
Privacy Protection
Backdoor Attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stealthy Backdoor Attack
Vertical Federated Graph Neural Networks
BVG Methodology
Jirui Yang
Fudan University, China
Peng Chen
Nanjing University of Information Science and Technology, China
Zhihui Lu
Fudan University, China
Ruijun Deng
Fudan University, China
Qiang Duan
Pennsylvania State University, USA
Jianping Zeng
Assistant Professor of Computer Science and Engineering at Arizona State University
Computer Architecture, Compilers