🤖 AI Summary
Graph neural networks (GNNs) suffer from poor interpretability due to their "black-box" nature, hindering adoption in safety-critical domains. While Shapley-value-based explanation methods offer theoretical rigor, exact computation has exponential complexity (O(2ⁿ) coalitions or O(n!) permutations), rendering it infeasible for real-world graphs. To address this, we propose QGShap, the first framework to integrate quantum amplitude amplification into exact Shapley value computation for GNN explanation. QGShap achieves a provable quadratic quantum speedup while preserving full fidelity, uniquely unifying quantum acceleration with exactness. By co-designing quantum algorithms, Shapley theory, and GNN interpretability modeling, QGShap demonstrates empirically verified quadratic scaling on synthetic graphs. It consistently outperforms state-of-the-art approximation methods in explanation accuracy, fidelity, and stability, producing attributions that are both logically consistent with GNN inference and rigorously trustworthy.
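To make the exponential cost concrete, exact Shapley values require enumerating every coalition of the remaining players for each player. A minimal sketch follows; the toy value function `v` is a hypothetical stand-in for a GNN prediction restricted to a subgraph (the paper's actual value function is not shown here):

```python
from itertools import combinations
from math import factorial

def exact_shapley(players, value):
    """Exact Shapley values by enumerating all 2^(n-1) coalitions per player."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Classical Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Marginal contribution of player i to coalition S
                total += w * (value(set(S) | {i}) - value(set(S)))
        phi[i] = total
    return phi

# Hypothetical value function: payoff grows quadratically with coalition size.
v = lambda S: len(S) ** 2
print(exact_shapley([0, 1, 2], v))  # symmetric game: each player gets 3.0
```

The inner loop visits all 2^(n-1) subsets per player, which is exactly the blow-up that motivates the quantum approach.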
📝 Abstract
Graph Neural Networks (GNNs) have become indispensable in critical domains such as drug discovery, social network analysis, and recommendation systems, yet their black-box nature hinders deployment in scenarios requiring transparency and accountability. While Shapley value-based methods offer mathematically principled explanations by quantifying each component's contribution to predictions, computing exact values requires evaluating $2^n$ coalitions (or aggregating over $n!$ permutations), which is intractable for real-world graphs. Existing approximation strategies sacrifice either fidelity or efficiency, limiting their practical utility. We introduce QGShap, a quantum computing approach that leverages amplitude amplification to achieve quadratic speedups in coalition evaluation while maintaining exact Shapley computation. Unlike classical sampling or surrogate methods, our approach provides fully faithful explanations without approximation trade-offs for tractable graph sizes. Empirical evaluations on synthetic graph datasets show that QGShap achieves consistently high fidelity and explanation accuracy, matching or exceeding classical methods across all evaluation metrics. These results demonstrate that QGShap not only preserves exact Shapley faithfulness but also delivers interpretable, stable, and structurally consistent explanations that align with the underlying graph reasoning of GNNs. The implementation of QGShap is available at https://github.com/smlab-niser/qgshap.
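The quadratic speedup from amplitude amplification can be illustrated with a small classical simulation of the Grover iteration (a sketch of the generic primitive, not the paper's circuit or its coalition oracle): searching one marked item among N = 1024 takes about (π/4)√N ≈ 25 amplification rounds, versus ~N classical evaluations.

```python
from math import sqrt, asin, pi, floor

def amplitude_amplify(n_items, marked):
    """Classically simulate Grover-style amplitude amplification.

    Starts from a uniform superposition and repeats oracle + diffusion
    the optimal number of times, ~ (pi/4) * sqrt(N/M) iterations.
    """
    amps = [1 / sqrt(n_items)] * n_items          # uniform superposition
    theta = asin(sqrt(len(marked) / n_items))     # initial rotation angle
    k = max(1, floor(pi / (4 * theta)))           # optimal iteration count
    for _ in range(k):
        # Oracle: flip the sign of the marked amplitudes.
        for m in marked:
            amps[m] = -amps[m]
        # Diffusion operator: inversion about the mean amplitude.
        mean = sum(amps) / n_items
        amps = [2 * mean - a for a in amps]
    p_success = sum(amps[m] ** 2 for m in marked)
    return k, p_success

k, p = amplitude_amplify(1024, {42})
print(k, p)  # 25 iterations, success probability > 0.99
```

The iteration count grows as √N while exhaustive classical search grows as N, which is the source of the quadratic speedup the abstract refers to.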