Graph Defense Diffusion Model

📅 2025-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph neural networks (GNNs) are vulnerable to diverse adversarial attacks—e.g., edge perturbations and node feature corruption—while existing defenses suffer from poor generalizability, reliance on strong heuristic priors, and trade-offs between robustness and structural fidelity. To address these limitations, we propose a diffusion-based graph purification framework. Our method introduces a novel dual-module architecture: a graph-structure-driven refiner that preserves topological integrity, and a node-feature-constrained regularizer that enforces feature consistency. Crucially, we design an adaptive denoising strategy that eliminates dependence on hand-crafted priors. By iteratively applying forward noising and reverse denoising, the framework jointly reconstructs both graph topology and node features. Extensive experiments on three real-world datasets demonstrate state-of-the-art performance, achieving superior robustness against multiple attack types and high purification fidelity.

📝 Abstract
Graph Neural Networks (GNNs) demonstrate significant potential in various applications but remain highly vulnerable to adversarial attacks, which can greatly degrade their performance. Existing graph purification methods attempt to address this issue by filtering attacked graphs; however, they struggle to defend against multiple types of adversarial attacks simultaneously due to their limited flexibility, and they lack comprehensive modeling of graph data due to their heavy reliance on heuristic prior knowledge. To overcome these challenges, we propose a more versatile approach for defending against adversarial attacks on graphs. In this work, we introduce the Graph Defense Diffusion Model (GDDM), a flexible purification method that leverages the denoising and modeling capabilities of diffusion models. The iterative nature of diffusion models aligns well with the stepwise process of adversarial attacks, making them particularly suitable for defense. By iteratively adding and removing noise, GDDM effectively purifies attacked graphs, restoring their original structure and features. Our GDDM consists of two key components: (1) the Graph Structure-Driven Refiner, which preserves the basic fidelity of the graph during the denoising process and ensures that the generated graph remains consistent with the original scope; and (2) the Node Feature-Constrained Regularizer, which removes residual impurities from the denoised graph, further enhancing the purification effect. Additionally, we design tailored denoising strategies to handle different types of adversarial attacks, improving the model's adaptability to various attack scenarios. Extensive experiments conducted on three real-world datasets demonstrate that GDDM outperforms state-of-the-art methods in defending against a wide range of adversarial attacks, showcasing its robustness and effectiveness.
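The abstract's core loop — diffusing an attacked graph forward with noise, then walking a reverse denoising chain so that both the injected noise and the attacker's edits are stripped away — can be sketched as below. This is a minimal illustration only: the Bernoulli edge-flip noising kernel, the `purify` signature, and the plug-in `denoiser` callable are assumptions for exposition, not the paper's actual GDDM architecture or transition kernel.

```python
import numpy as np

def forward_noise_step(adj, beta, rng):
    """One forward diffusion step on a binary adjacency matrix:
    independently flip each potential edge with probability beta
    (a toy discrete noising kernel, assumed for illustration)."""
    flips = np.triu(rng.random(adj.shape) < beta, k=1)
    flips = flips | flips.T                  # keep the graph undirected
    return np.where(flips, 1 - adj, adj)

def purify(adj_attacked, denoiser, t_star=5, beta=0.05, seed=0):
    """Diffusion-style purification: diffuse the attacked adjacency
    matrix up to noise level t_star, then run the reverse chain back
    to t = 0, letting the learned denoiser remove impurities."""
    rng = np.random.default_rng(seed)
    adj = adj_attacked.copy()
    for _ in range(t_star):                  # forward: add noise
        adj = forward_noise_step(adj, beta, rng)
    for t in range(t_star, 0, -1):           # reverse: denoise stepwise
        adj = denoiser(adj, t)
    return adj
```

In GDDM the reverse step would be a trained network (with the refiner and regularizer constraining structure and features); here `denoiser` is left as a hypothetical callable so the control flow of the add-noise/remove-noise cycle stays visible.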
Problem

Research questions and friction points this paper is trying to address.

Graph Neural Networks
Attack Defense
Prior Knowledge Dependence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph Defense Diffusion Model
Adversarial Attacks on GNNs
Denoising and Purification Process
Xin He
Jilin University, Changchun, China
Wenqi Fan
The Hong Kong Polytechnic University, Hong Kong, China
Yili Wang
Jilin University
Graph Neural Networks
Chengyi Liu
PhD, The Hong Kong Polytechnic University
Recommender System · Diffusion Model · GNN
Rui Miao
Meta
Networking · Networked Systems · Distributed Systems
Xin Juan
Jilin University, Changchun, China
Xin Wang
Jilin University, Changchun, China