🤖 AI Summary
QGNNs suffer from poor interpretability due to measurement stochasticity and the combinatorial complexity of graph-structured data. To address this, we propose a model-agnostic, post-hoc explanation framework: it generates structure-preserving graph perturbations to construct local surrogate models; ranks node and edge importance by aggregating attribution distributions and quantifying their dispersion; introduces, for the first time, an uncertainty-aware mechanism that leverages the Dvoretzky-Kiefer-Wolfowitz (DKW) inequality to provide distribution-free, finite-sample guarantees on the surrogate ensemble size; and analyzes robustness with respect to nonlinear surrogate choice and perturbation design. Experiments on synthetic graphs demonstrate accurate and stable explanations, and ablation studies confirm that nonlinear surrogates substantially enhance explanation quality. This work establishes the first general-purpose QGNN interpretability solution that ensures both statistical rigor and structural fidelity, enabling principled extension to real-world data and large-scale quantum hardware.
📝 Abstract
Quantum graph neural networks offer a powerful paradigm for learning on graph-structured data, yet their explainability is complicated by measurement-induced stochasticity and the combinatorial nature of graph structure. In this paper, we introduce QuantumGraphLIME (QGraphLIME), a model-agnostic, post-hoc framework that treats model explanations as distributions over local surrogates fitted on structure-preserving perturbations of a graph. By aggregating surrogate attributions together with their dispersion, QGraphLIME yields uncertainty-aware node and edge importance rankings for quantum graph models. The framework further provides a distribution-free, finite-sample guarantee on the size of the surrogate ensemble: a Dvoretzky-Kiefer-Wolfowitz bound ensures uniform approximation of the induced distribution of a binary class probability at target accuracy and confidence under standard independence assumptions. Empirical studies on controlled synthetic graphs with known ground truth demonstrate accurate and stable explanations, with ablations showing clear benefits of nonlinear surrogate modeling and highlighting sensitivity to perturbation design. Collectively, these results establish a principled, uncertainty-aware, and structure-sensitive approach to explaining quantum graph neural networks, and lay the groundwork for scaling to broader architectures and real-world datasets as quantum resources mature. Code is available at https://github.com/smlab-niser/qglime.
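To give a rough sense of the kind of ensemble-size guarantee the DKW bound provides (a back-of-the-envelope sketch, not the paper's exact statement or constants), the standard Dvoretzky-Kiefer-Wolfowitz inequality says that with n i.i.d. samples, the empirical CDF deviates from the true CDF by more than ε (uniformly) with probability at most 2·exp(−2nε²); solving for n yields the smallest ensemble size meeting a target accuracy ε and confidence 1 − δ:

```python
import math

def dkw_sample_size(epsilon: float, delta: float) -> int:
    """Smallest n such that 2 * exp(-2 * n * epsilon**2) <= delta,
    i.e. the empirical CDF of the surrogate statistic is uniformly
    within epsilon of the true CDF with probability >= 1 - delta,
    by the Dvoretzky-Kiefer-Wolfowitz inequality."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

# Example: uniform CDF accuracy 0.05 at 95% confidence
# requires 738 surrogate samples.
print(dkw_sample_size(0.05, 0.05))  # 738
```

Under these standard assumptions, the required ensemble size grows only logarithmically in 1/δ but quadratically in 1/ε, which is what makes a distribution-free guarantee practical at moderate accuracy levels.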