Beyond One-Size-Fits-All: Adaptive Subgraph Denoising for Zero-Shot Graph Learning with Large Language Models

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key challenge in zero-shot graph learning: generic, task-agnostic subgraph extraction strategies introduce structural noise that degrades the reasoning performance of large language models (LLMs). The authors propose GraphSSR, a framework built around a task-adaptive "Sample-Select-Reason" (SSR) pipeline for subgraph extraction and denoising. GraphSSR dynamically tailors denoising to the task at hand and internalizes this capability through SSR-SFT, supervised fine-tuning on synthesized SSR-style reasoning traces, followed by SSR-RL, a two-stage reinforcement learning strategy combining authenticity-reinforced and denoising-reinforced rewards, thereby moving beyond conventional static subgraph extraction. Experiments show that GraphSSR substantially improves zero-shot graph reasoning accuracy, enabling high-precision predictions from cleaner, more concise subgraphs.
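
To make the three stages concrete, here is a minimal Python sketch of a Sample-Select-Reason loop, assuming a generic text-in/text-out `llm` callable; the function names, prompts, and hop/fanout parameters are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a "Sample-Select-Reason" pass for one target node.
# `graph_ssr_predict`, the prompts, and the parsing logic are hypothetical.
from typing import Callable, Dict, List


def graph_ssr_predict(
    graph: Dict[int, List[int]],   # adjacency list: node id -> neighbor ids
    node_text: Dict[int, str],     # node id -> textual attributes
    target: int,                   # node whose label we want to predict
    task_prompt: str,              # task description given to the LLM
    llm: Callable[[str], str],     # any text-in/text-out LLM interface
    k_hops: int = 2,
    fanout: int = 10,
) -> str:
    # 1) Sample: breadth-first expansion around the target, capped per hop.
    frontier, sampled = [target], {target}
    for _ in range(k_hops):
        nxt = []
        for u in frontier:
            for v in graph.get(u, [])[:fanout]:
                if v not in sampled:
                    sampled.add(v)
                    nxt.append(v)
        frontier = nxt

    # 2) Select: ask the LLM to keep only task-relevant neighbors (denoising).
    candidates = "\n".join(
        f"[{v}] {node_text.get(v, '')}" for v in sampled if v != target
    )
    keep = llm(
        f"{task_prompt}\nTarget node: {node_text[target]}\n"
        f"Candidate neighbors:\n{candidates}\n"
        "List the ids of the neighbors relevant to the task, comma-separated."
    )
    kept_ids = {
        int(t) for t in keep.replace("[", " ").replace("]", " ").split(",")
        if t.strip().isdigit()
    }

    # 3) Reason: predict from the denoised subgraph only.
    context = "\n".join(node_text[v] for v in kept_ids if v in node_text)
    return llm(
        f"{task_prompt}\nTarget node: {node_text[target]}\n"
        f"Relevant neighborhood:\n{context}\nAnswer:"
    )
```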

📝 Abstract
Graph-based tasks in the zero-shot setting remain a significant challenge due to data scarcity and the inability of traditional Graph Neural Networks (GNNs) to generalize to unseen domains or label spaces. While recent advances have shifted toward leveraging Large Language Models (LLMs) as predictors to enhance GNNs, these methods often suffer from cross-modal alignment issues. A recent paradigm (i.e., Graph-R1) overcomes these architectural dependencies by adopting a purely text-based format and LLM-based graph reasoning, showing improved zero-shot generalization. However, it employs a task-agnostic, one-size-fits-all subgraph extraction strategy, which inevitably introduces significant structural noise (irrelevant neighbors and edges) that distorts the LLM's receptive field and leads to suboptimal predictions. To address this limitation, we introduce GraphSSR, a novel framework for adaptive subgraph extraction and denoising in zero-shot LLM-based graph reasoning. Specifically, we propose the SSR pipeline, which dynamically tailors subgraph extraction to the task context through a "Sample-Select-Reason" process, enabling the model to autonomously filter out task-irrelevant neighbors and overcome the one-size-fits-all issue. To internalize this capability, we develop SSR-SFT, a data synthesis strategy that generates high-quality SSR-style graph reasoning traces for supervised fine-tuning of LLMs. Furthermore, we propose SSR-RL, a two-stage reinforcement learning framework that explicitly regulates the sampling and selection operations of the SSR pipeline for adaptive subgraph denoising. By incorporating Authenticity-Reinforced and Denoising-Reinforced RL, we guide the model to make accurate predictions from parsimonious, denoised subgraphs.
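
The abstract does not give the reward design, but the two-stage idea of rewarding correct answers first and parsimony second can be sketched as below; the reward forms, the `alpha` weight, and the staging are assumptions for illustration only.

```python
# Hedged sketch of how authenticity- and denoising-style rewards could be composed.
# These functions and constants are hypothetical, not the paper's SSR-RL design.

def authenticity_reward(predicted_label: str, gold_label: str) -> float:
    """Stage 1: reward faithful, correct predictions (authenticity-reinforced)."""
    return 1.0 if predicted_label.strip().lower() == gold_label.strip().lower() else 0.0


def denoising_reward(kept_nodes: int, sampled_nodes: int, correct: bool) -> float:
    """Stage 2: reward parsimonious subgraphs, but only when the answer stays correct."""
    if not correct or sampled_nodes == 0:
        return 0.0
    # Smaller retained subgraphs earn a larger bonus (assumed linear penalty).
    return 1.0 - kept_nodes / sampled_nodes


def ssr_rl_reward(pred: str, gold: str, kept: int, sampled: int,
                  stage: int, alpha: float = 0.5) -> float:
    """Combine the two signals: stage 1 optimizes accuracy, stage 2 adds denoising."""
    acc = authenticity_reward(pred, gold)
    if stage == 1:
        return acc
    return acc + alpha * denoising_reward(kept, sampled, correct=acc > 0.0)
```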
Problem

Research questions and friction points this paper is trying to address.

zero-shot graph learning
subgraph denoising
large language models
structural noise
task-agnostic subgraph extraction
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive subgraph extraction
zero-shot graph learning
LLM-based graph reasoning
subgraph denoising
reinforcement learning