X-Node: Self-Explanation is All We Need

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing GNN interpretability methods are predominantly post-hoc and global, and fail to support the node-level transparent decision-making required in high-stakes clinical applications. Method: X-Node is proposed as the first self-explaining framework for graph neural networks, in which each node dynamically generates a structured explanation vector during inference that encodes local topological cues; a lightweight Reasoner module then reconstructs the node embedding from this vector, generates a natural-language explanation, and guides message passing. The explanation vectors are fed back into the backbone GNN (e.g., GCN, GAT, or GIN) to jointly improve fidelity and interpretability. Contribution/Results: Integrated with large language models (e.g., Gemini) through a text-injection mechanism, X-Node is validated on graph datasets derived from MedMNIST and MorphoMNIST. It achieves classification performance on par with baselines while producing faithful, fine-grained, human-readable node-level explanations, marking a departure from conventional post-hoc paradigms.

📝 Abstract
Graph neural networks (GNNs) have achieved state-of-the-art results in computer vision and medical image classification tasks by capturing structural dependencies across data instances. However, their decision-making remains largely opaque, limiting their trustworthiness in high-stakes clinical applications where interpretability is essential. Existing explainability techniques for GNNs are typically post-hoc and global, offering limited insight into individual node decisions or local reasoning. We introduce X-Node, a self-explaining GNN framework in which each node generates its own explanation as part of the prediction process. For every node, we construct a structured context vector encoding interpretable cues such as degree, centrality, clustering, feature saliency, and label agreement within its local topology. A lightweight Reasoner module maps this context into a compact explanation vector, which serves three purposes: (1) reconstructing the node's latent embedding via a decoder to enforce faithfulness, (2) generating a natural language explanation using a pre-trained LLM (e.g., Grok or Gemini), and (3) guiding the GNN itself via a "text-injection" mechanism that feeds explanations back into the message-passing pipeline. We evaluate X-Node on two graph datasets derived from MedMNIST and MorphoMNIST, integrating it with GCN, GAT, and GIN backbones. Our results show that X-Node maintains competitive classification accuracy while producing faithful, per-node explanations. Repository: https://github.com/basiralab/X-Node.
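The structured context vector described in the abstract can be sketched with plain adjacency lists. The cue set (degree, centrality, clustering, label agreement) follows the paper, but the function name, the use of degree centrality as the centrality measure, and the omission of feature saliency (which requires the backbone GNN) are simplifying assumptions of this sketch, not the paper's exact construction.

```python
import numpy as np

def node_context_vector(adj, node, labels):
    """Encode interpretable local-topology cues for one node.

    adj: dict mapping node -> set of neighbor nodes
    labels: dict mapping node -> class id
    """
    n = len(adj)
    neighbors = adj[node]
    degree = len(neighbors)
    centrality = degree / (n - 1) if n > 1 else 0.0  # degree centrality
    # Local clustering coefficient: fraction of neighbor pairs that are linked.
    links = sum(1 for u in neighbors for v in neighbors if u < v and v in adj[u])
    possible = degree * (degree - 1) / 2
    clustering = links / possible if possible else 0.0
    # Label agreement: fraction of neighbors sharing this node's label.
    agreement = (np.mean([labels[v] == labels[node] for v in neighbors])
                 if neighbors else 0.0)
    return np.array([degree, centrality, clustering, agreement], dtype=np.float32)

# Toy graph: a triangle (0-1-2) with a pendant node 3 attached to node 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
labels = {0: 1, 1: 1, 2: 1, 3: 0}
ctx = node_context_vector(adj, 0, labels)
# degree=2, centrality≈0.667, clustering=1.0, label agreement=1.0
```

In the full method this vector would also include feature-saliency cues, and would be computed for every node before being passed to the Reasoner.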
Problem

Research questions and friction points this paper is trying to address.

GNN decision-making lacks interpretability in clinical applications
Existing GNN explainability methods lack local node-level insights
Need for self-explaining GNNs with real-time node-specific explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-explaining GNN framework with node-generated explanations
Structured context vector encodes interpretable local topology cues
Explanation vector reconstructs embeddings and guides GNN
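The Reasoner step above, mapping a context vector to a compact explanation vector and reconstructing the node embedding from it to enforce faithfulness, can be sketched as a tiny MLP. Layer sizes, the ReLU/linear architecture, and the MSE reconstruction loss are assumptions for illustration; the paper's actual module may differ, and the natural-language and text-injection steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

class Reasoner:
    """Maps a node context vector to an explanation vector, and decodes
    that vector back into the embedding space so a reconstruction loss
    can encourage the explanation to stay faithful to the embedding."""

    def __init__(self, ctx_dim=4, expl_dim=8, emb_dim=16):
        self.W1 = rng.normal(0, 0.1, (ctx_dim, 32))   # encoder layer 1
        self.W2 = rng.normal(0, 0.1, (32, expl_dim))  # encoder layer 2
        self.Wd = rng.normal(0, 0.1, (expl_dim, emb_dim))  # decoder

    def forward(self, ctx, node_emb):
        h = np.maximum(ctx @ self.W1, 0.0)   # ReLU hidden layer
        expl = h @ self.W2                   # compact explanation vector
        recon = expl @ self.Wd               # reconstructed node embedding
        recon_loss = np.mean((recon - node_emb) ** 2)
        return expl, recon_loss

reasoner = Reasoner()
ctx = rng.normal(size=(5, 4))    # 5 nodes, 4 context cues each
emb = rng.normal(size=(5, 16))   # stand-in for backbone GNN embeddings
expl, loss = reasoner.forward(ctx, emb)
```

In training, the reconstruction loss would be added to the classification objective, and the explanation vectors would be injected back into message passing, which this sketch does not model.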