Towards Comprehensive and Prerequisite-Free Explainer for Graph Neural Networks

📅 2025-05-20
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing methods in the field of GNN explainability (XGNN) face two key bottlenecks: (1) an inability to characterize the model's global decision logic across a sample space spanning diverse distributions, and (2) heavy reliance on white-box GNN access and prior knowledge of edge attributes. This paper proposes OPEN, the first global explanation framework that requires neither model access nor edge-attribute assumptions. It introduces environment-aware subgraph sampling to automatically partition the data space into multiple distributional environments, followed by a cross-environment, distribution-adaptive mechanism for inferring decision logic. OPEN is the first method to model the complete inference path of black-box GNNs, capturing nearly all of their decision logic. It achieves significantly higher fidelity than state-of-the-art methods while maintaining comparable inference efficiency, and demonstrates superior robustness on real-world heterogeneous graphs.

πŸ“ Abstract
To enhance the reliability and credibility of graph neural networks (GNNs) and improve the transparency of their decision logic, a new field of explainability of GNNs (XGNN) has emerged. However, two major limitations severely degrade the performance and hinder the generalizability of existing XGNN methods: they (a) fail to capture the complete decision logic of GNNs across diverse distributions in the entire dataset's sample space, and (b) impose strict prerequisites on edge properties and GNN internal accessibility. To address these limitations, we propose OPEN, a novel Comprehensive and Prerequisite-free Explainer for GNNs. OPEN, as the first work in the literature, can infer and partition the entire dataset's sample space into multiple environments, each containing graphs that follow a distinct distribution. OPEN further learns the decision logic of GNNs across different distributions by sampling subgraphs from each environment and analyzing their predictions, thus eliminating the need for strict prerequisites. Experimental results demonstrate that OPEN captures nearly complete decision logic of GNNs, outperforms state-of-the-art methods in fidelity while maintaining similar efficiency, and enhances robustness in real-world scenarios.
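The abstract's pipeline, i.e. partition the dataset into distributional environments, sample subgraphs from each, and infer decision logic from the black box's predictions alone, can be illustrated with a toy sketch. Everything below is hypothetical: the toy `black_box_gnn`, the degree-based environment split, and the frequency-based "logic" summary are stand-ins for OPEN's actual learned components, which the paper does not reduce to this simple form. Only input/output access to the model is used, matching the prerequisite-free setting.

```python
import random
from itertools import combinations

def black_box_gnn(graph):
    """Toy black-box classifier (hypothetical): predicts 1 iff the
    graph, given as {node: set_of_neighbors}, contains a triangle."""
    for a, b, c in combinations(graph, 3):
        if b in graph[a] and c in graph[a] and c in graph[b]:
            return 1
    return 0

def infer_environments(graphs, threshold=1.5):
    """Crude stand-in for environment inference: split graphs into
    two 'environments' by average degree."""
    envs = {"sparse": [], "dense": []}
    for g in graphs:
        avg_deg = sum(len(nbrs) for nbrs in g.values()) / max(len(g), 1)
        envs["dense" if avg_deg >= threshold else "sparse"].append(g)
    return envs

def sample_subgraph(graph, k, rng):
    """Random node-induced subgraph on k nodes."""
    keep = set(rng.sample(sorted(graph), min(k, len(graph))))
    return {n: {m for m in graph[n] if m in keep} for n in keep}

def explain(graphs, n_samples=200, k=3, seed=0):
    """Per-environment frequency of positive predictions on sampled
    subgraphs -- a crude proxy for cross-distribution decision logic."""
    rng = random.Random(seed)
    logic = {}
    for env, members in infer_environments(graphs).items():
        if not members:
            continue
        hits = sum(
            black_box_gnn(sample_subgraph(rng.choice(members), k, rng))
            for _ in range(n_samples)
        )
        logic[env] = hits / n_samples
    return logic

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}   # dense environment
path = {0: {1}, 1: {0, 2}, 2: {1}}             # sparse environment
print(explain([triangle, path]))  # e.g. {'sparse': 0.0, 'dense': 1.0}
```

The point of the sketch is the access pattern: the "explainer" never inspects model weights or edge attributes, only queries the black box on sampled subgraphs grouped by environment.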
Problem

Research questions and friction points this paper is trying to address.

Enhancing GNN transparency by explaining decision logic comprehensively
Overcoming limitations in capturing diverse data distributions in XGNN
Eliminating strict prerequisites for edge properties and GNN accessibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comprehensive prerequisite-free GNN explainer OPEN
Infers and partitions dataset into diverse environments
Learns GNN decision logic across different distributions