🤖 AI Summary
Real-world graph data often exhibit multiple defects, including noise, missing values, and inconsistencies, that severely degrade the performance of Graph Neural Networks (GNNs). Prior work predominantly addresses individual defects in isolation, lacking systematic robustness evaluation of both conventional GNNs and emerging LLM-on-graph approaches under composite defects. This paper presents the first empirical comparative study of the two lines of methods, revealing that LLM augmentation is not universally superior and exhibits notable fragility under structural-textual misalignment. To address this, the authors propose Robust Graph Learning via Retrieval-Augmented Contrastive Refinement (RoGRAD), a framework integrating retrieval-augmented generation (RAG), graph contrastive learning, dynamic feature enhancement, and class-consistency regularization, transforming static feature injection into an iterative retrieve-generate-contrast optimization process. Extensive experiments on multiple text-attributed graph benchmarks demonstrate that RoGRAD achieves up to 82.43% average improvement, significantly outperforming both conventional and LLM-enhanced baselines.
📄 Abstract
Graph Neural Networks (GNNs) are widely adopted in Web-related applications, serving as a core technique for learning from graph-structured data such as text-attributed graphs. Yet in real-world scenarios, such graphs exhibit deficiencies that substantially undermine GNN performance. While prior GNN-based augmentation studies have explored robustness against individual imperfections, a systematic understanding of how graph-native and Large Language Model (LLM)-enhanced methods behave under compound deficiencies is still missing. In particular, there has been no comprehensive investigation comparing conventional approaches with recent LLM-on-graph frameworks, leaving their relative merits unclear. To fill this gap, we conduct the first empirical study that benchmarks these two lines of methods across diverse graph deficiencies, revealing overlooked vulnerabilities and challenging the assumption that LLM augmentation is consistently superior. Building on these empirical findings, we propose the Robust Graph Learning via Retrieval-Augmented Contrastive Refinement (RoGRAD) framework. Unlike prior one-shot LLM-as-Enhancer designs, RoGRAD is the first iterative paradigm that leverages Retrieval-Augmented Generation (RAG) to supply retrieval-grounded, class-consistent, and diverse augmentations, enforcing discriminative representations through iterative graph contrastive learning. It thereby transforms LLM augmentation for graphs from static signal injection into dynamic refinement. Extensive experiments demonstrate RoGRAD's superiority over both conventional GNN- and LLM-enhanced baselines, achieving up to 82.43% average improvement.
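The abstract describes an iterative retrieve-generate-contrast loop: retrieve grounded evidence for a node, generate a class-consistent augmentation from it, and refine the representation against a contrastive objective. The toy sketch below illustrates that control flow only; it is not the paper's implementation. All names (`retrieve`, `refine`, `info_nce`), the averaging "generation" step, and the numeric setup are illustrative assumptions, with an InfoNCE-style loss standing in for the graph contrastive objective.

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-12)

def info_nce(anchor, positive, negatives, tau=0.5):
    # InfoNCE-style contrastive loss: pull the positive close, push negatives away.
    logits = [cosine(anchor, positive) / tau] + [cosine(anchor, n) / tau for n in negatives]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))

def retrieve(node_emb, corpus, k=2):
    # Hypothetical retrieval step: top-k most similar corpus embeddings
    # (stands in for RAG over external text evidence).
    return sorted(corpus, key=lambda c: -cosine(node_emb, c))[:k]

def refine(node_emb, corpus, steps=5, lr=0.3):
    # Toy retrieve -> generate -> contrast loop (illustrative only).
    emb = list(node_emb)
    for _ in range(steps):
        retrieved = retrieve(emb, corpus)
        # "Generate": average retrieved evidence into a class-consistent augmentation.
        aug = [sum(r[i] for r in retrieved) / len(retrieved) for i in range(len(emb))]
        # "Contrast": move the embedding toward the augmentation, a crude proxy for
        # one gradient step on a contrastive loss with aug as the positive view.
        emb = [(1 - lr) * e + lr * a for e, a in zip(emb, aug)]
    return emb

# Tiny synthetic corpus: two entries per class in 2-D (class A near x-axis, class B near y-axis).
CORPUS = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
```

Running `refine([0.6, 0.4], CORPUS)` pulls an ambiguous embedding toward its retrieved (class-A) neighbors, increasing its similarity to the class-A direction and lowering the InfoNCE loss against the class-B negatives, which is the qualitative behavior the iterative refinement aims for.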