🤖 AI Summary
To address performance degradation caused by statistical heterogeneity across clients in subgraph federated learning, this paper proposes FedSheafHN. Methodologically, it introduces (1) a server-side collaboration graph with a layer-wise sheaf diffusion mechanism that fuses graph-level embeddings and uniformly enhances client representations, and (2) a lightweight hypernetwork that generates personalized model parameters for efficient client-specific modeling. By jointly optimizing representation learning and model personalization, FedSheafHN achieves significant improvements over state-of-the-art baselines across multiple graph benchmarks. Empirically, it demonstrates faster convergence, superior generalization, and strong adaptability to unseen clients, without increasing communication overhead or compromising privacy guarantees.
📄 Abstract
Graph-structured data is prevalent in many applications. In subgraph federated learning (FL), this data is distributed across clients, each holding a local subgraph. Personalized subgraph FL aims to develop a customized model for each client to handle diverse data distributions. However, performance variation across clients remains a key issue due to the heterogeneity of local subgraphs. To overcome this challenge, we propose FedSheafHN, a novel framework built on a sheaf collaboration mechanism that unifies enhanced client descriptors with efficient personalized model generation. Specifically, FedSheafHN embeds each client's local subgraph into a server-constructed collaboration graph by leveraging graph-level embeddings, and employs sheaf diffusion within the collaboration graph to enrich client representations. Subsequently, FedSheafHN generates customized client models via a server-optimized hypernetwork. Empirical evaluations demonstrate that FedSheafHN outperforms existing personalized subgraph FL methods on various graph datasets. Additionally, it exhibits fast model convergence and effectively generalizes to new clients.
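The server-side pipeline described above (graph-level client embeddings → diffusion over a collaboration graph → hypernetwork-generated personalized parameters) can be sketched in a few lines. This is a minimal NumPy illustration under stated assumptions: all dimensions and the fully connected collaboration graph are hypothetical, and the plain normalized-adjacency diffusion step is a stand-in for the paper's layer-wise sheaf diffusion (it does not implement an actual sheaf Laplacian).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 clients, each summarized by a graph-level embedding
# of its local subgraph (dimensions chosen for illustration only).
num_clients, embed_dim, param_dim = 4, 8, 6
client_embeddings = rng.normal(size=(num_clients, embed_dim))

# Server-constructed collaboration graph over clients: here fully connected.
# One step of degree-normalized graph diffusion enriches each client's
# descriptor with information from the other clients (a simplified proxy
# for the paper's sheaf diffusion mechanism).
adj = np.ones((num_clients, num_clients)) - np.eye(num_clients)
deg = adj.sum(axis=1, keepdims=True)
diffused = client_embeddings + 0.5 * (adj @ client_embeddings / deg
                                      - client_embeddings)

# Lightweight hypernetwork: a single linear map (in the paper this would be
# a trained, server-optimized network) from each enriched client descriptor
# to a flat vector of personalized model parameters.
W_hyper = rng.normal(scale=0.1, size=(embed_dim, param_dim))
personalized_params = diffused @ W_hyper  # one parameter row per client

print(personalized_params.shape)
```

Because the hypernetwork lives on the server and consumes only compact graph-level descriptors, each round communicates embeddings and generated parameters rather than raw subgraph data, which is how the method avoids extra communication overhead.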