🤖 AI Summary
The hidden representations of graph neural networks (GNNs) are opaque, hindering interpretation of what the models learn and how they behave.
Method: This paper proposes the Graph Lingual Network (GLN), the first framework to represent GNN hidden states directly as natural-language text, making them human-readable. Through structured prompt engineering with large language models (LLMs), GLN implements not only message passing but also advanced GNN techniques, including graph attention and initial residual connections, mapping latent graph representations end-to-end to interpretable text.
Contribution/Results: GLN supports zero-shot transfer and outperforms existing LLM-based baselines on node classification and link prediction. Crucially, its readable hidden states enable intuitive, layer-wise analysis of how node representations evolve with GNN depth, and make it possible to examine the functional effect of advanced architectural components (e.g., attention mechanisms, residual connections), offering an explainability-oriented lens on GNN behavior.
📝 Abstract
While graph neural networks (GNNs) have shown remarkable performance across diverse graph-related tasks, their high-dimensional hidden representations render them black boxes. In this work, we propose Graph Lingual Network (GLN), a GNN built on large language models (LLMs), with hidden representations in the form of human-readable text. Through careful prompt design, GLN incorporates not only the message passing module of GNNs but also advanced GNN techniques, including graph attention and initial residual connection. The comprehensibility of GLN's hidden representations enables an intuitive analysis of how node representations change (1) across layers and (2) under advanced GNN techniques, shedding light on the inner workings of GNNs. Furthermore, we demonstrate that GLN achieves strong zero-shot performance on node classification and link prediction, outperforming existing LLM-based baseline methods.
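To make the components named above concrete, here is a minimal NumPy sketch of one conventional GNN layer combining message passing, dot-product graph attention, and a GCNII-style initial residual connection. This is an illustrative simplification under assumed conventions (the function name, the attention score form, and the mixing weight `alpha` are all hypothetical); GLN itself realizes these steps through LLM prompts over text, not matrix arithmetic.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_layer(H, H0, adj, W, alpha=0.1):
    """One message-passing layer with graph attention and an initial
    residual connection. Hypothetical simplification for illustration:
    GLN performs analogous steps via prompting, not linear algebra.

    H   : current node features, shape (n, d)
    H0  : initial (layer-0) node features, shape (n, d)
    adj : boolean adjacency matrix (self-loops assumed present)
    W   : weight matrix, shape (d, d)
    """
    Z = H @ W
    out = np.zeros_like(Z)
    n = H.shape[0]
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i, j]]
        # Graph attention: weight each neighbor's message by a
        # softmax-normalized dot-product similarity score.
        scores = softmax(np.array([Z[i] @ Z[j] for j in nbrs]))
        msg = sum(a * Z[j] for a, j in zip(scores, nbrs))
        # Initial residual: mix a fraction of the layer-0 features
        # back in, which counteracts oversmoothing in deep GNNs.
        out[i] = (1 - alpha) * msg + alpha * H0[i]
    return out
```

In GLN, the analogue of `out[i]` is a textual node description, so the effect of attention weights or the residual mix can be read directly rather than probed numerically.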