🤖 AI Summary
Graph Foundation Models (GFMs) suffer from model degradation and representation collapse during multi-domain transfer, leading to low-quality reconstruction supervision signals. To address these challenges, we propose MoT, a novel framework built on the Graph VQ-MAE architecture. MoT introduces (1) an edge-semantic fusion module and a domain-aware hybrid codebook routing mechanism that jointly model graph structure and domain-specific information, and (2) a dual regularization strategy that alleviates the information bottleneck and enhances the semantic separability of learned embeddings. Together, these components construct a discrete embedding space and deliver stronger gradient supervision to the decoder. Extensive experiments on 22 datasets across 6 domains demonstrate that MoT consistently outperforms state-of-the-art methods in supervised, few-shot, and zero-shot transfer tasks, achieving superior generalizability and scalability.
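The summary names a domain-aware hybrid codebook routing mechanism without spelling out its mechanics. The sketch below shows one plausible reading in PyTorch: a router softly weights several VQ codebooks per domain, each embedding is quantized by nearest-neighbor lookup in every codebook, and a straight-through estimator keeps the encoder trainable. The class name `DomainAwareMixtureOfCodebooks`, the soft router, and all hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainAwareMixtureOfCodebooks(nn.Module):
    """Hypothetical sketch of a mixture-of-codebooks VQ layer with
    domain-aware routing; not the paper's actual implementation."""

    def __init__(self, num_domains: int, num_codebooks: int,
                 codebook_size: int, dim: int):
        super().__init__()
        # One VQ codebook per expert: (K, S, D).
        self.codebooks = nn.Parameter(
            torch.randn(num_codebooks, codebook_size, dim) * 0.02)
        # Router maps a domain id to mixture weights over the K codebooks.
        self.router = nn.Embedding(num_domains, num_codebooks)

    def forward(self, z: torch.Tensor, domain_id: torch.Tensor):
        # z: (N, D) continuous node embeddings; domain_id: (N,) integer labels.
        weights = F.softmax(self.router(domain_id), dim=-1)        # (N, K)
        # Squared distances to every code in every codebook: (N, K, S).
        dists = ((z[:, None, None, :] - self.codebooks[None]) ** 2).sum(-1)
        idx = dists.argmin(dim=-1)                                 # (N, K)
        k = torch.arange(self.codebooks.size(0), device=z.device)
        picked = self.codebooks[k[None, :], idx]                   # (N, K, D)
        z_q = (weights.unsqueeze(-1) * picked).sum(dim=1)          # (N, D)
        # Standard VQ losses: pull codes toward encoder outputs, and
        # commit encoder outputs to their selected codes.
        vq_loss = (F.mse_loss(z_q, z.detach())
                   + 0.25 * F.mse_loss(z, z_q.detach()))
        # Straight-through estimator: gradients bypass the argmin.
        z_q = z + (z_q - z).detach()
        return z_q, vq_loss
```

Keeping one codebook per expert while routing by domain is one way such a design could raise information capacity: each domain concentrates its codes in a subspace suited to it instead of competing for a single shared codebook.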
📝 Abstract
Graph foundation models (GFMs), inspired by the success of LLMs, aim to learn optimal embeddings from multi-domain text-attributed graphs (TAGs) that generalize across downstream tasks. In our investigation, graph VQ-MAE stands out among the increasingly diverse landscape of GFM architectures, owing to its ability to jointly encode topology and textual attributes from multiple domains into discrete embedding spaces with clear semantic boundaries. Despite this potential, domain generalization conflicts cause subtle pitfalls. In this paper, we instantiate two of them, which behave like two sides of the same GFM optimization coin. Side 1, Model Degradation: the encoder and codebook fail to capture the diversity of inputs. Side 2, Representation Collapse: the hidden embeddings and codebook vectors fail to preserve semantic separability, constrained to narrow representation subspaces. These two pitfalls (sides) collectively impair the decoder and yield low-quality reconstruction supervision, causing the GFM optimization dilemma during pre-training (coin). Through empirical investigation, we attribute these challenges to an Information Bottleneck and a Regularization Deficit. To address them, we propose MoT (Mixture-of-Tinkers): (1) an Information Tinker for the two pitfalls, which uses an edge-wise semantic fusion strategy and a mixture-of-codebooks with domain-aware routing to improve information capacity; and (2) a Regularization Tinker for the optimization coin, which adds two regularizations that further improve gradient supervision in the Information Tinker. Notably, as a flexible architecture, MoT adheres to GFM scaling laws and offers a controllable model scale. Experiments on 22 datasets across 6 domains demonstrate that MoT achieves significant improvements over SOTA baselines in supervised, few-shot, and zero-shot scenarios.
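The abstract likewise names an edge-wise semantic fusion strategy without detailing it. One plausible form, sketched below under assumptions, fuses per-edge text embeddings into neighbor messages before aggregation, so edge semantics shape the representations the encoder feeds to quantization. The module name `EdgeSemanticFusion` and the mean-aggregation choice are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class EdgeSemanticFusion(nn.Module):
    """Hypothetical sketch: fuse per-edge text embeddings into neighbor
    messages before mean aggregation; not the paper's exact design."""

    def __init__(self, dim: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor,
                edge_attr: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) node features; edge_index: (2, E) source/target ids;
        # edge_attr: (E, dim) edge text embeddings (e.g., from a frozen LM).
        src, dst = edge_index
        # Concatenate each source node with its edge semantics, then project.
        messages = self.fuse(torch.cat([x[src], edge_attr], dim=-1))  # (E, dim)
        out = torch.zeros_like(x)
        out.index_add_(0, dst, messages)                 # sum per target node
        deg = torch.bincount(dst, minlength=x.size(0)).clamp(min=1)
        return out / deg.unsqueeze(-1).to(out.dtype)     # mean aggregation
```

Under this reading, the fused messages give the VQ stage edge-conditioned inputs, which is one concrete way a fusion module could widen the information bottleneck the paper identifies.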