🤖 AI Summary
Existing GNN explanation methods typically yield technical subgraphs or feature importance scores that are difficult for non-experts to interpret, diminishing their explanatory value. This paper introduces GraphXAIN, a narrative-based explainability approach for GNNs that leverages large language models (LLMs) to automatically translate explanatory subgraphs and feature attributions into natural-language explanations, without modifying the original GNN or restricting the underlying explainer. The approach preserves the technical rigor required by domain experts while improving the comprehensibility, persuasiveness, shareability, and trustworthiness of explanations. Experiments on real-world graph datasets and a survey of machine learning researchers and practitioners show that GraphXAIN enhances four explainability dimensions (understandability, satisfaction, convincingness, and suitability for communicating model predictions), and that 95% of participants found it a valuable addition to the GNN explanation method.
📝 Abstract
Graph Neural Networks (GNNs) are a powerful technique for machine learning on graph-structured data, yet they pose challenges in interpretability. Existing GNN explanation methods usually yield technical outputs, such as subgraphs and feature importance scores, that are difficult for non-data scientists to understand, thereby undermining the purpose of explanations. Motivated by recent Explainable AI (XAI) research, we propose GraphXAIN, a method that generates natural language narratives explaining GNN predictions. GraphXAIN is model- and explainer-agnostic: it uses Large Language Models (LLMs) to translate explanatory subgraphs and feature importance scores into coherent, story-like explanations of GNN decision-making processes. Evaluations on real-world datasets demonstrate GraphXAIN's ability to improve graph explanations. A survey of machine learning researchers and practitioners reveals that GraphXAIN enhances four explainability dimensions: understandability, satisfaction, convincingness, and suitability for communicating model predictions. When combined with another graph explainer method, GraphXAIN further improves trustworthiness, insightfulness, confidence, and usability. Notably, 95% of participants found GraphXAIN a valuable addition to the GNN explanation method. By incorporating natural language narratives, our approach serves both graph practitioners and non-expert users, providing clearer and more effective explanations.