AI Summary
This work proposes a parameter-efficient framework for node classification on text-attributed graphs, addressing two challenges: the inefficient fusion of structural and semantic information, and the high computational cost of full-parameter fine-tuning of large language models (LLMs). By integrating graph neural networks with LLMs and leveraging parameter-efficient fine-tuning techniques such as Low-Rank Adaptation (LoRA), the method injects graph structural information into the LLM while updating only 0.24% of the model's total parameters. Experiments on three real-world text-attributed graph datasets show that the proposed approach achieves classification performance comparable to or better than state-of-the-art models while substantially reducing computational overhead, offering both high efficiency and strong scalability.
Abstract
The rapid rise of large language models (LLMs) and their ability to capture semantic relationships have led to their adoption in a wide range of applications. Text-attributed graphs (TAGs) are a notable example where LLMs can be combined with graph neural networks to improve node classification performance. In a TAG, each node is associated with textual content; such graphs are common in domains such as social networks, citation graphs, and recommendation systems. Effectively learning from TAGs enables representations that capture both the structural and textual information of the graph, improving decision-making in these domains. We present GaLoRA, a parameter-efficient framework that integrates structural information into LLMs. GaLoRA demonstrates competitive performance on node classification over TAGs, performing on par with state-of-the-art models while training just 0.24% of the parameter count required by full LLM fine-tuning. We experiment with three real-world datasets to showcase GaLoRA's effectiveness in combining structural and semantic information on TAGs.
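To make the parameter-efficiency claim concrete, the following is a minimal NumPy sketch of the Low-Rank Adaptation (LoRA) idea the abstract names: a frozen pretrained weight matrix is augmented with a trainable low-rank update, so only a small fraction of parameters is trained. The layer sizes, rank, and scaling here are illustrative assumptions, not GaLoRA's actual configuration.

```python
import numpy as np

# Illustrative LoRA sketch (not GaLoRA's actual architecture or shapes):
# the pretrained weight W stays frozen; only the low-rank factors A and B
# would be updated during fine-tuning.
d_out, d_in, rank = 768, 768, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection (zero-init)

def lora_forward(x, alpha=16.0):
    """Frozen path W @ x plus the scaled low-rank update (B @ A) @ x.

    Because B starts at zero, the layer initially matches the frozen model.
    """
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)

# Only A and B are trainable; W is frozen.
trainable = A.size + B.size
total = W.size + trainable
print(f"trainable fraction: {trainable / total:.4f}")  # ~2% at rank 8
```

With rank 8 on a 768x768 layer, the trainable factors hold roughly 2% of the layer's parameters; applying such adapters to only a few projection matrices of a full LLM is how fractions on the order of the 0.24% reported in this work become attainable.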