🤖 AI Summary
To address the lack of interpretability in temporal graph regression models, this paper proposes the first interpretable and traceable temporal graph neural network framework by integrating the Information Bottleneck (IB) principle with prototype learning. Methodologically: (1) it derives a novel mutual information bound tailored to graph-structured data, enabling joint optimization of feature compression and discriminability; (2) it introduces an unsupervised auxiliary classification head to enhance prototype concept disentanglement and improve semantic interpretability of the bottleneck layer; and (3) it unifies multi-task learning with prototype-guided training for temporal GNNs. Experiments on real-world traffic datasets demonstrate that the method significantly outperforms existing baselines in both prediction accuracy and interpretability metrics—including prototype relevance and attribution consistency—achieving a principled balance between high predictive performance and model transparency.
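For context, the standard Information Bottleneck objective, which the paper's new mutual information bound extends to graph-structured regression, trades off compression of the input against predictiveness of the target:

$$\min_{p(z \mid x)} \; I(X;Z) \;-\; \beta\, I(Z;Y)$$

where $Z$ is the bottleneck representation, $I(\cdot;\cdot)$ denotes mutual information, and $\beta > 0$ balances discriminability against compression. The paper's specific bound for temporal graphs is not reproduced here.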
📝 Abstract
Deep neural networks (DNNs) have demonstrated remarkable performance across various domains, yet their application to temporal graph regression tasks faces significant challenges regarding interpretability. This critical issue, rooted in the inherent complexity of both DNNs and the underlying spatio-temporal patterns in the graph, calls for innovative solutions. While interpretability concerns in Graph Neural Networks (GNNs) mirror those of DNNs, to the best of our knowledge, no notable work has addressed the interpretability of temporal GNNs using a combination of Information Bottleneck (IB) principles and prototype-based methods. Our research introduces a novel approach that uniquely integrates these techniques to enhance the interpretability of temporal graph regression models. The key contributions of our work are threefold: We introduce the **G**raph **IN**terpretability in **T**emporal **R**egression task using **I**nformation bottleneck and **P**rototype (**GINTRIP**) framework, the first combined application of IB and prototype-based methods for interpretable temporal graph tasks. We derive a novel theoretical bound on mutual information (MI), extending the applicability of IB principles to graph regression tasks. We incorporate an unsupervised auxiliary classification head, fostering multi-task learning and diverse concept representation, which enhances the interpretability of the model's bottleneck. Our model is evaluated on real-world traffic datasets, outperforming existing methods in both forecasting accuracy and interpretability-related metrics.
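To make the prototype-based interpretability idea concrete, below is a minimal, hypothetical sketch of a prototype readout layer in the style of generic prototype networks: a latent embedding is scored against learned prototype vectors, and those similarity scores serve both as evidence for an interpretable explanation and as features for the regression head. The similarity form, dimensions, and function names are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def prototype_similarities(z, prototypes, eps=1e-8):
    """Similarity s_k = log((d_k + 1) / (d_k + eps)) with d_k = ||z - p_k||^2.
    This is a common choice in prototype networks: s_k is large when z is
    close to prototype p_k and near zero when it is far away."""
    d = np.sum((prototypes - z) ** 2, axis=1)  # squared distance to each prototype
    return np.log((d + 1.0) / (d + eps))

def predict(z, prototypes, w, b):
    """Regression output as a linear readout over prototype similarities,
    so each prediction decomposes into per-prototype contributions s_k * w_k."""
    s = prototype_similarities(z, prototypes)
    return float(s @ w + b), s

rng = np.random.default_rng(0)
prototypes = rng.normal(size=(4, 8))            # K=4 prototypes in an 8-dim latent space
z = prototypes[2] + 0.01 * rng.normal(size=8)   # an embedding lying near prototype 2
w, b = rng.normal(size=4), 0.0

y_hat, s = predict(z, prototypes, w, b)
# The most activated prototype "explains" the prediction:
print(int(np.argmax(s)))  # → 2
```

In a full model such scores would be trained jointly with the regression loss, the IB regularizer, and the auxiliary classification head described above; this sketch only shows why prototype similarities yield a traceable explanation per prediction.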