🤖 AI Summary
Graph Foundation Models (GFMs) suffer from high performance variance and low adaptation efficiency in few-shot fine-tuning due to support-sample randomness and structural discrepancies between source and target graphs.
Method: This paper proposes GRAVER, a robust fine-tuning framework grounded in generative graph vocabularies. It introduces a graphon-driven generative expert model enabling unified multi-domain pre-training and prompt-based efficient fine-tuning, augmented by ego-graph decoupling analysis, graph similarity task templates, lightweight MoE-CoE routing, and context-aware prompt enhancement.
Contribution/Results: The paper first defines transferable and interpretable generative graph vocabularies: generalizable graph semantic units rigorously derived via graph-primitive disentanglement and validated with graphon theory. On few-shot node and graph classification benchmarks, the method significantly outperforms 15 state-of-the-art approaches, markedly improving fine-tuning stability, cross-domain adaptation efficiency, and cross-task generalization.
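To make the "ego-graph decoupling" and "graph similarity task template" ideas concrete, here is a minimal sketch of extracting h-hop ego-graphs and scoring their structural similarity. The degree-multiset signature and Jaccard-style score are hypothetical stand-ins chosen for illustration; the paper's actual template and similarity function are not specified here.

```python
from collections import Counter, deque

def ego_graph(adj, center, hops=2):
    """Extract the h-hop ego-graph around `center` via BFS.

    `adj` is an adjacency list: {node: [neighbors]}.
    Returns the induced subgraph restricted to nodes within `hops`.
    """
    dist = {center: 0}
    queue = deque([center])
    while queue:
        u = queue.popleft()
        if dist[u] == hops:
            continue  # do not expand beyond the hop limit
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    nodes = set(dist)
    return {u: [v for v in adj[u] if v in nodes] for u in nodes}

def degree_signature(ego):
    """Sorted degree multiset of an ego-graph: a crude structural descriptor
    (hypothetical; used only to make the similarity template runnable)."""
    return sorted(len(neighbors) for neighbors in ego.values())

def ego_similarity(sig_a, sig_b):
    """Jaccard-style overlap of two degree multisets in [0, 1]."""
    a, b = Counter(sig_a), Counter(sig_b)
    union = sum((a | b).values())
    return sum((a & b).values()) / union if union else 1.0
```

For example, a triangle's 1-hop ego-graph around any node yields the signature `[2, 2, 2]`, and comparing it against a 3-node path's signature `[1, 1, 2]` gives a low similarity, matching the intuition that the template should separate structurally distinct classes.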
📝 Abstract
Inspired by the remarkable success of foundation models in language and vision, Graph Foundation Models (GFMs) hold significant promise for broad applicability across diverse graph tasks and domains. However, existing GFMs struggle with unstable few-shot fine-tuning: both performance and adaptation efficiency fluctuate significantly due to randomness in support-sample selection and structural discrepancies between the pre-trained and target graphs. The major challenge is how to fine-tune GFMs robustly and efficiently to enable trustworthy knowledge transfer across domains and tasks. In this paper, we propose GRAVER, a novel Generative gRAph VocabulariEs for Robust GFM fine-tuning framework that tackles this instability via generative augmentations. Specifically, to identify transferable units, we analyze and extract key class-specific subgraph patterns through ego-graph disentanglement and validate their transferability both theoretically and empirically. To enable effective pre-training across diverse domains, we leverage a universal task template based on ego-graph similarity and construct graph vocabularies via graphon-based generative experts. To facilitate robust and efficient prompt fine-tuning, we grave the support samples with in-context vocabularies, where a lightweight MoE-CoE network attentively routes knowledge from source domains. Extensive experiments on downstream few-shot node and graph classification tasks demonstrate that GRAVER surpasses 15 state-of-the-art baselines in effectiveness, robustness, and efficiency.
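The graphon-based generative step can be illustrated with a minimal sketch: estimate a step-function graphon (block edge densities) from an observed graph, then sample augmented graphs from it. The degree-sorting alignment and equal-size blocks are simplifying assumptions for illustration, not the paper's actual generative expert model.

```python
import random

def estimate_step_graphon(adj, k=2):
    """Estimate a k-block step-function graphon from an adjacency matrix.

    Nodes are sorted by degree (a common graphon-alignment heuristic)
    and split into k equal blocks; each cell W[a][b] is the empirical
    edge density between blocks a and b. Assumption: a simplified
    stand-in for the paper's graphon-driven generative experts.
    """
    n = len(adj)
    order = sorted(range(n), key=lambda v: sum(adj[v]), reverse=True)
    blocks = [order[i * n // k:(i + 1) * n // k] for i in range(k)]
    W = [[0.0] * k for _ in range(k)]
    for a in range(k):
        for b in range(k):
            pairs = [(u, v) for u in blocks[a] for v in blocks[b] if u != v]
            if pairs:
                W[a][b] = sum(adj[u][v] for u, v in pairs) / len(pairs)
    return W

def sample_from_graphon(W, n, seed=0):
    """Sample an n-node simple graph: draw a uniform latent block per node,
    then connect each pair independently with probability W[z_u][z_v]."""
    rng = random.Random(seed)
    k = len(W)
    z = [rng.randrange(k) for _ in range(n)]
    adj = [[0] * n for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < W[z[u]][z[v]]:
                adj[u][v] = adj[v][u] = 1
    return adj
```

In the paper's setting, graphs sampled this way would serve as generative augmentations of the few support samples, reducing the variance introduced by support-sample randomness.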