AI Summary
To address the modality mismatch between graph-structured data and large language models (LLMs), which hinders joint modeling of graph topology and semantics, this paper proposes Dr.E, a novel end-to-end modality alignment framework. Methodologically, it introduces: (1) a token-level LLM-GNN alignment mechanism that maps structural information into natural-language tokens interpretable by LLMs; (2) multi-view central-node modeling that integrates structural representations from multi-hop neighborhoods; and (3) a dual-residual vector-quantized variational autoencoder (Dr.VQ-VAE) for structure-aware, modality-aligned embedding learning. The approach offers interpretability, robustness, and visual explainability. It achieves performance competitive with state-of-the-art methods on standard graph classification and link prediction benchmarks, enabling efficient and robust cross-modal translation between graphs and language. The implementation is publicly available.
Abstract
Significant efforts have been dedicated to integrating powerful Large Language Models (LLMs) with diverse modalities, particularly the fusion of language, vision, and audio data. However, graph-structured data, inherently rich in structural and domain-specific knowledge, has not yet been gracefully adapted to LLMs. Existing methods either describe graphs with raw text, losing structural information, or feed Graph Neural Network (GNN) embeddings into LLMs, sacrificing explainable prompt semantics. To bridge this gap, we introduce an end-to-end modality-aligning framework for LLM-graph alignment: the Dual-Residual Vector Quantized-Variational AutoEncoder, namely Dr.E. Our approach is purposefully designed to facilitate token-level alignment with LLMs, enabling an effective translation of the intrinsic "language" of graphs into comprehensible natural language. We further strengthen LLMs' structural understanding of graphs by incorporating multiple views of each central node, built from its surrounding nodes at various distances. Our experimental evaluations on standard graph tasks demonstrate competitive performance against other state-of-the-art (SOTA) approaches. Additionally, our framework offers a degree of visual interpretability, efficiency, and robustness, marking a promising step toward token-level alignment between LLMs and GNNs. Our code is available at: https://github.com/Timothy914/Dr.E.
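To make the token-level alignment idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation): a GNN node embedding is vector-quantized against a frozen LLM token-embedding table by nearest-neighbor lookup, so each node is rendered as a natural-language token the LLM can read. The vocabulary, dimensions, and random embeddings below are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in vocabulary and frozen "LLM token embedding" table (assumed setup).
vocab = ["node", "edge", "graph", "hub", "leaf", "cycle", "dense", "sparse"]
token_emb = rng.normal(size=(len(vocab), 16))

# Stand-in for a GNN-produced embedding of one central node.
node_emb = rng.normal(size=16)

def quantize(z, codebook):
    """Return the index of the codebook row nearest to z (the VQ lookup step)."""
    dists = np.linalg.norm(codebook - z, axis=1)
    return int(np.argmin(dists))

idx = quantize(node_emb, token_emb)
print(f"node described by token: {vocab[idx]}")
```

In the actual framework the codebook would be the LLM's own token embeddings and the encoder would carry dual residual connections; this sketch only shows the core quantization that turns continuous structure into interpretable tokens.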