Shapley-Value-Based Graph Sparsification for GNN Inference

๐Ÿ“… 2025-07-27
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address the low inference efficiency and poor interpretability of Graph Neural Networks (GNNs), this paper proposes a Shapley-value-based graph sparsification method. Unlike conventional non-negative edge importance measures, our approach introduces, for the first time, a signed Shapley edge contribution assessment that explicitly models both positive (facilitative) and negative (adversarial) edge effectsโ€”enabling precise identification and pruning of redundant or harmful edges. By combining subgraph sampling with efficient marginal contribution estimation, we achieve scalable, global edge importance ranking to guide structured graph pruning. Experiments across multiple benchmark datasets demonstrate that our method reduces edge counts by 30โ€“50% on average while preserving GNN predictive accuracy (prediction error increase <1.2%), accelerating inference by 1.8โ€“2.4ร—, and enhancing model interpretability. The framework is theoretically grounded in cooperative game theory and empirically validated for practical effectiveness.

Technology Category

Application Category

๐Ÿ“ Abstract
Graph sparsification is a key technique for improving inference efficiency in Graph Neural Networks by removing edges with minimal impact on predictions. GNN explainability methods generate local importance scores, which can be aggregated into global scores for graph sparsification. However, many explainability methods produce only non-negative scores, limiting their applicability for sparsification. In contrast, Shapley value based methods assign both positive and negative contributions to node predictions, offering a theoretically robust and fair allocation of importance by evaluating many subsets of graphs. Unlike gradient-based or perturbation-based explainers, Shapley values enable better pruning strategies that preserve influential edges while removing misleading or adversarial connections. Our approach shows that Shapley value-based graph sparsification maintains predictive performance while significantly reducing graph complexity, enhancing both interpretability and efficiency in GNN inference.
Problem

Research questions and friction points this paper is trying to address.

Improving GNN inference efficiency via graph sparsification
Overcoming non-negative score limitations in sparsification methods
Enhancing pruning strategies using Shapley value-based importance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Shapley values for graph sparsification
Aggregates local importance into global scores
Preserves influential edges, removes misleading ones
๐Ÿ”Ž Similar Papers
S
Selahattin Akkas
Department of Intelligent Systems Engineering, Indiana University Bloomington
Ariful Azad
Ariful Azad
Texas A&M University
Graph algorithmsHigh performance computingBioinformatics