Can Large Language Models Act as Ensembler for Multi-GNNs?

📅 2024-10-22
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing graph neural networks (GNNs) exhibit limited semantic understanding on text-attributed graphs and poor cross-dataset generalization. To address these limitations, we propose an LLM-driven multi-GNN ensemble paradigm, the first to employ a large language model (LLM) as a unified integrator for joint structural-semantic modeling: it aggregates complementary structural representations from diverse GNNs while enhancing deep semantic comprehension of textual node attributes. Methodologically, we introduce a two-stage representation alignment mechanism: (i) inter-GNN layer-wise node representation alignment, followed by (ii) latent-space alignment between GNNs and the LLM via LoRA adaptation. Evaluated on multiple text-attributed graph benchmarks, our approach consistently outperforms both single-GNN baselines and conventional ensemble methods, achieving simultaneous improvements in semantic understanding and structural reasoning capabilities.

πŸ“ Abstract
Graph Neural Networks (GNNs) have emerged as powerful models for learning from graph-structured data. However, GNNs lack the inherent ability to semantically understand rich textual node attributes, which limits their effectiveness in applications. On the other hand, we empirically observe that among existing GNN models, no single one consistently outperforms the others across diverse datasets. In this paper, we study whether LLMs can act as an ensembler for multiple GNNs and propose the LensGNN model. The model first aligns multiple GNNs, mapping the representations of different GNNs into the same space. Then, through LoRA fine-tuning, it aligns this space with that of the LLM, injecting graph tokens and textual information into the LLM. This allows LensGNN to ensemble multiple GNNs and exploit the strengths of the LLM, leading to a deeper understanding of both textual semantic information and graph structural information. Experimental results show that LensGNN outperforms existing models. This research advances text-attributed graph ensemble learning by providing a robust and superior solution for integrating semantic and structural information. We provide our code and data here: https://anonymous.4open.science/r/EnsemGNN-E267/.
Problem

Research questions and friction points this paper is trying to address.

LLMs ensemble multiple GNNs for improved graph learning
Align GNN and LLM spaces to integrate semantic and structural information
Address inconsistent GNN performance across diverse datasets via ensembling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligns multiple GNNs into unified representation space
Uses LoRA fine-tuning to bridge GNN and LLM spaces
Injects graph tokens and textual information into LLMs
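The two-stage alignment described above can be illustrated with a minimal pure-Python sketch. This is not the authors' implementation: the GNN names, `SHARED_DIM`, the random projection weights, and the `<graph_i>` token placeholders are all illustrative assumptions. Stage 1 projects each GNN's node embedding (of differing dimensionality) into one shared space; stage 2 treats the aligned embeddings as graph tokens prepended to the node's text tokens before they would enter an LLM.

```python
import random

random.seed(0)

def linear(vec, weights):
    """Apply a weight matrix (one row per output dimension) to a vector."""
    return [sum(w * x for w, x in zip(row, vec)) for row in weights]

def random_matrix(out_dim, in_dim):
    """Illustrative stand-in for a learned projection head."""
    return [[random.uniform(-1, 1) for _ in range(in_dim)] for _ in range(out_dim)]

SHARED_DIM = 4  # assumed width of the shared representation space

# Stage 1: per-GNN projection heads map each GNN's node embedding
# (possibly of different dimensionality) into one shared space.
gnn_outputs = {
    "GCN":  [0.2, 0.7, 0.1],        # 3-dim embedding for one node
    "GAT":  [0.5, 0.3, 0.9, 0.4],   # 4-dim embedding for the same node
    "SAGE": [0.8, 0.1],             # 2-dim embedding
}
projections = {name: random_matrix(SHARED_DIM, len(vec))
               for name, vec in gnn_outputs.items()}
aligned = {name: linear(vec, projections[name])
           for name, vec in gnn_outputs.items()}

# Stage 2: the aligned embeddings act as "graph tokens" that are
# prepended to the node's text tokens; in the real model, LoRA
# fine-tuning would align this space with the LLM's token space.
graph_tokens = [aligned[name] for name in ("GCN", "GAT", "SAGE")]
text_tokens = ["<title>", "Graph", "Neural", "Networks", "</title>"]
llm_input = [f"<graph_{i}>" for i in range(len(graph_tokens))] + text_tokens

print(len(graph_tokens), len(graph_tokens[0]))
print(llm_input)
```

In this sketch the LLM itself is elided; the point is only that every GNN, regardless of its native embedding width, contributes a token of the same shared dimensionality, which is what lets a single LLM ensemble their outputs.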
Hanqi Duan
East China Normal University
Yao Cheng
East China Normal University
Jianxiang Yu
East China Normal University
Xiang Li
East China Normal University

Topics: Data mining, Large language models