🤖 AI Summary
Graph Neural Networks (GNNs) remain difficult to interpret, particularly when explanations must be expressed in natural language that integrates semantic features. This paper introduces GraphNarrator, the first end-to-end framework for generating natural-language explanations of GNN predictions, combining pseudo-label guidance, expert-in-the-loop iterative refinement, and saliency-aware language modeling. Node- and edge-level saliency maps are first produced via Gradient-weighted Class Activation Mapping (Grad-CAM); these guide fine-tuning of a large language model (LLM) through pseudo-label supervision. Domain expert feedback is then incorporated in a closed-loop optimization to further improve explanation quality. Extensive evaluation, including automated metrics (BLEU, ROUGE, faithfulness) and human assessments (comprehensibility, plausibility, relevance), demonstrates statistically significant improvements over state-of-the-art baselines, yielding explanations that are faithful, concise, and aligned with human preferences.
📝 Abstract
Graph representation learning has garnered significant attention due to its broad applications across domains such as recommendation systems and social network analysis. Despite advances in graph learning methods, challenges remain in explainability, particularly when graphs are associated with semantic features. In this paper, we present GraphNarrator, the first method designed to generate natural language explanations for Graph Neural Networks. GraphNarrator employs a generative language model that maps input-output pairs to explanations reflecting the model's decision-making process. To address the lack of ground-truth explanations for training, we propose first generating pseudo-labels that capture the model's decisions from saliency-based explanations, then using Expert Iteration to iteratively train the pseudo-label generator against training objectives on explanation quality. The high-quality pseudo-labels are finally used to train an end-to-end explanation generator. Extensive experiments demonstrate the effectiveness of GraphNarrator in producing faithful, concise, and human-preferred natural language explanations.
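The pseudo-label pipeline described above can be illustrated with a minimal sketch. This is not the paper's implementation: `quality_score`, `generate_candidates`, and the string-based "generator" are hypothetical stand-ins, with faithfulness approximated by coverage of salient tokens and conciseness by a length penalty, and the fine-tuning step imitated by biasing a candidate pool toward winning explanations.

```python
# Hypothetical sketch of pseudo-label selection with Expert Iteration.
# All names and scoring heuristics are illustrative assumptions, not the
# actual GraphNarrator implementation.
import random

def quality_score(explanation, salient_tokens):
    """Toy proxy for explanation quality: coverage of salient tokens
    (faithfulness) multiplied by a length penalty (conciseness)."""
    words = explanation.split()
    coverage = sum(1 for t in salient_tokens if t in words) / len(salient_tokens)
    brevity = 1.0 / (1.0 + 0.05 * len(words))
    return coverage * brevity

def generate_candidates(pool, rng, k=4):
    """Stand-in for sampling k candidate explanations from the generator."""
    return [rng.choice(pool) for _ in range(k)]

def expert_iteration(pool, salient_tokens, rounds=3, seed=0):
    """Each round: sample candidates, keep the best-scoring one as the
    pseudo-label, and mimic fine-tuning by adding the winner back to the
    pool so it is more likely to be sampled in later rounds."""
    rng = random.Random(seed)
    pseudo_labels = []
    for _ in range(rounds):
        candidates = generate_candidates(pool, rng)
        best = max(candidates, key=lambda e: quality_score(e, salient_tokens))
        pseudo_labels.append(best)
        pool = pool + [best]  # crude imitation of the update step
    return pseudo_labels

# Usage: salient tokens come from a saliency map over the input graph;
# the pool plays the role of LLM-generated candidate explanations.
pool = [
    "node A links hub B",
    "the graph is large",
    "node A links hub B via edge E and many other paths",
]
labels = expert_iteration(pool, salient_tokens=["A", "B"])
```

In the actual framework the scoring objective, candidate generation, and update step would be replaced by explanation-quality metrics, LLM sampling, and supervised fine-tuning on the selected pseudo-labels.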