🤖 AI Summary
This work addresses the lack of reliable interpretability in continuous-time dynamic graph models for high-stakes domains such as healthcare and transportation. We propose the first instance-level differentiable explanation framework tailored to dynamic spatiotemporal graphs. Methodologically, it integrates gradient-guided local search, continuous relaxation optimization, subgraph importance modeling, and spatiotemporal attention to jointly optimize for both fidelity and conciseness. Our key contribution lies in explicitly incorporating dynamic graph structural evolution and temporal dependencies into the explanation generation process, enabling end-to-end differentiable subgraph search. Evaluated on multiple benchmark datasets, our approach improves explanation fidelity (measured by AOPC) by 12.7% and reduces average explanation subgraph size by 38%, significantly enhancing model trustworthiness and human readability.
📝 Abstract
Recent improvements in the expressive power of spatio-temporal models have led to performance gains in many real-world applications, such as traffic forecasting and social network modelling. However, understanding the predictions from a model is crucial to ensure reliability and trustworthiness, particularly for high-risk applications, such as healthcare and transport. Few existing methods are able to generate explanations for models trained on continuous-time dynamic graph data and, of these, the computational complexity and lack of suitable explanation objectives pose challenges. In this paper, we propose $ extbf{S}$patio-$ extbf{T}$emporal E$ extbf{X}$planation $ extbf{Search}$ (STX-Search), a novel method for generating instance-level explanations that is applicable to static and dynamic temporal graph structures. We introduce a novel search strategy and objective function, to find explanations that are highly faithful and interpretable. When compared with existing methods, STX-Search produces explanations of higher fidelity whilst optimising explanation size to maintain interpretability.