AI Summary
Existing graph neural networks (GNNs) exhibit limited semantic understanding of text-attributed graphs and poor cross-dataset generalization. To address these limitations, we propose an LLM-driven multi-GNN ensemble paradigm, the first to employ a large language model (LLM) as a unified integrator for joint structural-semantic modeling: it aggregates complementary structural representations from diverse GNNs while enabling deep semantic comprehension of textual node attributes. Methodologically, we introduce a two-stage representation alignment mechanism: (i) inter-GNN layer-wise node representation alignment, followed by (ii) latent-space alignment between the GNNs and the LLM via LoRA adaptation. Evaluated on multiple text-attributed graph benchmarks, our approach consistently outperforms both single-GNN baselines and conventional ensemble methods, achieving simultaneous improvements in semantic understanding and structural reasoning.
Abstract
Graph Neural Networks (GNNs) have emerged as powerful models for learning from graph-structured data. However, GNNs lack the inherent ability to semantically understand rich textual node attributes, limiting their effectiveness in many applications. Moreover, we empirically observe that among existing GNN models, no single one consistently outperforms the others across diverse datasets. In this paper, we study whether LLMs can act as an ensembler for multiple GNNs and propose the LensGNN model. The model first aligns multiple GNNs, mapping the representations of the different GNNs into a shared space. Then, through LoRA fine-tuning, it aligns this GNN space with the LLM's, injecting graph tokens and textual information into the LLM. This allows LensGNN to ensemble multiple GNNs and exploit the strengths of the LLM, leading to a deeper understanding of both textual semantic information and graph structural information. Experimental results show that LensGNN outperforms existing models. This research advances text-attributed graph ensemble learning by providing a robust and superior solution for integrating semantic and structural information. We provide our code and data here: https://anonymous.4open.science/r/EnsemGNN-E267/.
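The two-stage alignment described above can be sketched in a few lines. The following is a minimal, illustrative sketch only, not the paper's implementation: all dimensions, the choice of linear projectors, and the use of averaging as the ensemble step are assumptions, and the learned LoRA fine-tuning is replaced here by fixed random matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): two GNNs with different
# embedding sizes, a shared alignment space, and an LLM hidden size.
d_gnn = {"gcn": 64, "gat": 128}
d_shared, d_llm, n_nodes = 32, 256, 5

# Stage (i): per-GNN linear maps project node representations from each
# GNN into one shared space so they can be compared and ensembled.
proj_to_shared = {k: rng.standard_normal((d, d_shared)) / np.sqrt(d)
                  for k, d in d_gnn.items()}
gnn_out = {k: rng.standard_normal((n_nodes, d)) for k, d in d_gnn.items()}
shared = {k: gnn_out[k] @ proj_to_shared[k] for k in d_gnn}

# Simple stand-in for the ensemble step: average the per-GNN views.
ensembled = np.mean([shared[k] for k in d_gnn], axis=0)

# Stage (ii): a second projector maps the ensembled representation into
# the LLM's token-embedding space; in the paper this alignment is learned
# jointly with LoRA fine-tuning of the LLM.
proj_to_llm = rng.standard_normal((d_shared, d_llm)) / np.sqrt(d_shared)
graph_tokens = ensembled @ proj_to_llm  # one "graph token" per node

# These graph tokens would be injected alongside the text-token embeddings
# of a node's attributes before being fed to the (LoRA-tuned) LLM.
print(graph_tokens.shape)  # (5, 256)
```

The key design point the sketch mirrors is that alignment happens twice: once across GNNs (so heterogeneous embeddings become comparable) and once between the graph space and the LLM's input space (so graph structure can be consumed as tokens).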