๐ค AI Summary
To address the low inference efficiency and poor interpretability of Graph Neural Networks (GNNs), this paper proposes a Shapley-value-based graph sparsification method. Unlike conventional non-negative edge importance measures, our approach introduces, for the first time, a signed Shapley edge contribution assessment that explicitly models both positive (facilitative) and negative (adversarial) edge effectsโenabling precise identification and pruning of redundant or harmful edges. By combining subgraph sampling with efficient marginal contribution estimation, we achieve scalable, global edge importance ranking to guide structured graph pruning. Experiments across multiple benchmark datasets demonstrate that our method reduces edge counts by 30โ50% on average while preserving GNN predictive accuracy (prediction error increase <1.2%), accelerating inference by 1.8โ2.4ร, and enhancing model interpretability. The framework is theoretically grounded in cooperative game theory and empirically validated for practical effectiveness.
๐ Abstract
Graph sparsification is a key technique for improving inference efficiency in Graph Neural Networks by removing edges with minimal impact on predictions. GNN explainability methods generate local importance scores, which can be aggregated into global scores for graph sparsification. However, many explainability methods produce only non-negative scores, limiting their applicability for sparsification. In contrast, Shapley value based methods assign both positive and negative contributions to node predictions, offering a theoretically robust and fair allocation of importance by evaluating many subsets of graphs. Unlike gradient-based or perturbation-based explainers, Shapley values enable better pruning strategies that preserve influential edges while removing misleading or adversarial connections. Our approach shows that Shapley value-based graph sparsification maintains predictive performance while significantly reducing graph complexity, enhancing both interpretability and efficiency in GNN inference.