🤖 AI Summary
To address the challenge of training large models on resource-constrained clients in federated learning, this paper proposes a distributed training framework based on a sparse Mixture-of-Experts (MoE) architecture. The method introduces two key innovations: (1) a domain-aware, fine-grained expert aggregation mechanism that jointly models intra-client expert correlations and inter-client data heterogeneity; and (2) peer-to-peer selective expert synchronization across clients, which substantially reduces server-side communication overhead. By preserving model personalization and robustness, the approach improves both communication efficiency and convergence speed. Extensive experiments on multiple heterogeneous benchmarks show gains in average accuracy, a 47% reduction in server communication volume, 23% faster convergence, and better personalized performance than state-of-the-art federated learning methods.
📝 Abstract
Federated learning (FL) is a collaborative machine learning approach that enables multiple clients to train models without sharing their private data. With the rise of deep learning, large-scale models have garnered significant attention due to their exceptional performance. However, a key challenge in FL is the limitation imposed by clients with constrained computational and communication resources, which hampers the deployment of these large models. The Mixture of Experts (MoE) architecture addresses this challenge with its sparse activation property, which reduces computational workload and communication demands during inference and updates. Additionally, MoE facilitates better personalization by allowing each expert to specialize in a different subset of the data distribution. To alleviate the communication burden between the server and clients, we propose FedMoE-DA, a new FL model training framework that leverages the MoE architecture and incorporates a novel domain-aware, fine-grained aggregation strategy to simultaneously enhance robustness, personalizability, and communication efficiency. Specifically, it exploits both the correlations among intra-client expert models and the data heterogeneity across clients. Moreover, we utilize peer-to-peer (P2P) communication between clients for selective expert model synchronization, thus significantly reducing server-client transmissions. Experiments demonstrate that our FedMoE-DA achieves excellent performance while reducing the communication pressure on the server.
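The efficiency argument above rests on MoE's sparse activation: per input, a gating network selects only the top-k experts, so most expert parameters are neither computed nor updated. The paper does not specify its gating implementation; the following is a minimal NumPy sketch of generic top-k MoE routing (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def topk_moe_forward(x, expert_weights, gate_weights, k=2):
    """Sparse MoE layer: route input x through only the top-k experts.

    x              : (d_in,) input vector
    expert_weights : list of (d_in, d_out) matrices, one per expert
    gate_weights   : (d_in, n_experts) gating matrix
    Only k of n_experts run per input, which is what reduces both
    compute and the set of expert models needing synchronization.
    """
    logits = x @ gate_weights                 # (n_experts,) gating scores
    topk = np.argsort(logits)[-k:]            # indices of the k largest scores
    gates = np.exp(logits[topk] - logits[topk].max())
    gates /= gates.sum()                      # softmax over the selected experts only
    # Weighted combination of just the activated experts' outputs.
    return sum(g * (x @ expert_weights[i]) for g, i in zip(gates, topk))

# Toy usage with random parameters (dimensions are arbitrary).
rng = np.random.default_rng(0)
d_in, d_out, n_experts = 8, 4, 6
experts = [rng.standard_normal((d_in, d_out)) for _ in range(n_experts)]
gate = rng.standard_normal((d_in, n_experts))
y = topk_moe_forward(rng.standard_normal(d_in), experts, gate, k=2)
print(y.shape)
```

With k=2 of 6 experts active, only a third of the expert parameters participate in this forward pass; in an FL round, a client would likewise only need to transmit the experts it actually activated and updated.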