GRAVER: Generative Graph Vocabularies for Robust Graph Foundation Models Fine-tuning

📅 2025-11-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph Foundation Models (GFMs) suffer from high performance variance and low adaptation efficiency in few-shot fine-tuning, caused by support-sample randomness and structural discrepancies between source and target graphs. Method: the paper proposes a robust fine-tuning framework grounded in generative graph vocabularies. A graphon-driven generative expert model enables unified multi-domain pre-training and efficient prompt-based fine-tuning, augmented by ego-graph disentanglement analysis, an ego-graph-similarity task template, lightweight MoE-CoE routing, and context-aware prompt enhancement. Contribution/Results: the paper is the first to define transferable and interpretable generative graph vocabularies, i.e., generalizable graph semantic units rigorously derived via graph-primitive disentanglement and validated with graphon theory. On few-shot node and graph classification benchmarks, the method significantly outperforms 15 state-of-the-art approaches, markedly improving fine-tuning stability, cross-domain adaptation efficiency, and cross-task generalization.

📝 Abstract
Inspired by the remarkable success of foundation models in language and vision, Graph Foundation Models (GFMs) hold significant promise for broad applicability across diverse graph tasks and domains. However, existing GFMs struggle with unstable few-shot fine-tuning, where both performance and adaptation efficiency fluctuate significantly due to randomness in support-sample selection and structural discrepancies between the pre-trained and target graphs. How to fine-tune GFMs robustly and efficiently, enabling trustworthy knowledge transfer across domains and tasks, remains the major challenge. In this paper, we propose GRAVER, a novel Generative gRAph VocabulariEs for Robust GFM fine-tuning framework, which tackles the aforementioned instability via generative augmentations. Specifically, to identify transferable units, we analyze and extract key class-specific subgraph patterns by ego-graph disentanglement and validate their transferability both theoretically and empirically. To enable effective pre-training across diverse domains, we leverage a universal task template based on ego-graph similarity and construct graph vocabularies via graphon-based generative experts. To facilitate robust and efficient prompt fine-tuning, we grave the support samples with in-context vocabularies, where a lightweight MoE-CoE network attentively routes knowledge from source domains. Extensive experiments on downstream few-shot node and graph classification tasks demonstrate the superiority of GRAVER in effectiveness, robustness, and efficiency against 15 state-of-the-art baselines.
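The "universal task template based on ego-graph similarity" can be illustrated with a toy sketch. Assumptions are mine, not the paper's: graphs are plain adjacency dicts, and similarity is tested with 1-WL color refinement; GRAVER's actual template and similarity measure are more sophisticated.

```python
# Toy sketch of an ego-graph-similarity task template (illustrative only,
# not GRAVER's implementation): node-level tasks reduce to comparing a
# node's k-hop ego-graph against reference ego-graphs.
from collections import deque

def ego_nodes(adj, root, radius=2):
    """BFS out to `radius` hops; return the ego-graph's node set."""
    seen, frontier = {root}, deque([(root, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == radius:
            continue
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                frontier.append((v, d + 1))
    return seen

def wl_signature(adj, nodes, rounds=3):
    """1-WL color refinement restricted to `nodes`; the sorted multiset of
    final colors is an isomorphism-sensitive signature of the subgraph."""
    color = {u: 0 for u in nodes}
    for _ in range(rounds):
        color = {u: hash((color[u],
                          tuple(sorted(color[v] for v in adj[u] if v in nodes))))
                 for u in nodes}
    return tuple(sorted(color.values()))

def ego_similarity(adj, u, v, radius=2):
    """1.0 iff the two ego-graphs receive identical WL signatures."""
    su = wl_signature(adj, ego_nodes(adj, u, radius))
    sv = wl_signature(adj, ego_nodes(adj, v, radius))
    return float(su == sv)

# Two triangles bridged by the edge 2-3; nodes 0 and 4 play symmetric roles.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(ego_similarity(adj, 0, 4, radius=1))  # 1.0: both 1-hop egos are triangles
print(ego_similarity(adj, 0, 2, radius=1))  # 0.0: node 2 also touches the bridge
```

In this toy form the similarity is binary; a trained GFM would instead embed ego-graphs and compare them continuously.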
Problem

Research questions and friction points this paper is trying to address.

Addresses unstable few-shot fine-tuning in Graph Foundation Models
Tackles structural discrepancies between pre-trained and target graphs
Enables robust knowledge transfer across domains and tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extracts transferable subgraph patterns via ego-graph disentanglement
Constructs graph vocabularies using graphon-based generative experts
Enables robust fine-tuning with in-context vocabularies and MoE-CoE routing
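The graphon idea behind the generative experts can be sketched as follows. This is a minimal toy, assuming a step-function graphon estimated by degree-sorted block averaging and Bernoulli edge sampling; the block count `K` and every function name here are illustrative choices, not the paper's model.

```python
# Toy step-function graphon: estimate a K x K edge-probability grid from
# observed graphs, then sample synthetic graphs from it (the kind of
# generative augmentation a graphon-based expert could provide).
import random

def estimate_graphon(adjs, K=4):
    """Average degree-sorted adjacency matrices into a K x K step function."""
    W = [[0.0] * K for _ in range(K)]
    cnt = [[0] * K for _ in range(K)]
    for A in adjs:
        n = len(A)
        order = sorted(range(n), key=lambda i: -sum(A[i]))  # sort by degree
        for a, i in enumerate(order):
            for b, j in enumerate(order):
                p, q = a * K // n, b * K // n
                W[p][q] += A[i][j]
                cnt[p][q] += 1
    return [[W[p][q] / max(cnt[p][q], 1) for q in range(K)] for p in range(K)]

def sample_graph(W, n, rng=None):
    """Sample an n-node graph: node i draws a latent u_i ~ U[0, 1] and
    edge (i, j) appears with probability W(u_i, u_j)."""
    rng = rng if rng is not None else random.Random(0)
    K = len(W)
    u = [rng.random() for _ in range(n)]
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            p = W[min(int(u[i] * K), K - 1)][min(int(u[j] * K), K - 1)]
            if rng.random() < p:
                A[i][j] = A[j][i] = 1
    return A

# Fit on a single complete 6-node graph, then sample a new graph.
adjs = [[[0 if i == j else 1 for j in range(6)] for i in range(6)]]
W = estimate_graphon(adjs, K=3)
synthetic = sample_graph(W, n=8)
```

A class-specific expert would fit one such `W` per extracted vocabulary unit, so sampled graphs inherit that unit's structural pattern.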