Accelerating Mixture-of-Experts Training with Adaptive Expert Replication

📅 2025-04-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
In MoE model training, dynamic routing causes severe load imbalance across experts; existing solutions either discard tokens, harming convergence, or frequently migrate experts, incurring high state-management overhead. This paper introduces a paradigm that decouples optimizer state placement from expert parameter placement: optimizer states are statically partitioned across devices to eliminate migration overhead, while expert parameter placement is adjusted dynamically by repurposing existing weight-update traffic, enabling zero-overhead, per-iteration GPU resource elasticity. The approach integrates adaptive expert replication, distributed optimizer sharding, update-driven parameter remapping, and load-aware routing under coordinated scheduling. Without token dropping or accuracy degradation, it achieves 30.5% and 25.9% faster convergence than DeepSpeed-MoE and FlexMoE, respectively. To our knowledge, this is the first method enabling fine-grained, zero-overhead, adaptive resource allocation for MoE training.
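To make the "adaptive expert replication" idea concrete, here is a minimal illustrative sketch (not the paper's actual algorithm; the function and its greedy policy are assumptions for exposition): give each expert at least one replica, then hand out the remaining GPU slots to whichever expert currently has the most tokens per replica.

```python
def allocate_replicas(token_counts, total_gpu_slots):
    """Hypothetical sketch of load-aware, per-iteration replica sizing:
    replica counts grow roughly in proportion to each expert's token load.
    Assumes total_gpu_slots >= number of experts."""
    replicas = [1] * len(token_counts)  # every expert gets one replica
    remaining = total_gpu_slots - len(token_counts)
    for _ in range(remaining):
        # Find the expert with the highest per-replica load and give it
        # one more replica.
        loads = [token_counts[i] / replicas[i] for i in range(len(replicas))]
        replicas[loads.index(max(loads))] += 1
    return replicas

# Example: with loads 100/300/600 tokens and 6 slots, the hot expert
# ends up with the most replicas.
print(allocate_replicas([100, 300, 600], 6))  # → [1, 2, 3]
```

Because the paper's placement changes piggyback on ordinary weight updates, a plan like this can be recomputed every iteration without paying a separate migration cost.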

📝 Abstract
Mixture-of-Experts (MoE) models have become a widely adopted solution to continue scaling model sizes without a corresponding linear increase in compute. During MoE model training, each input token is dynamically routed to a subset of experts -- sparsely-activated feed-forward networks -- within each transformer layer. The distribution of tokens assigned to each expert varies widely and rapidly over the course of training. To handle the wide load imbalance across experts, current systems are forced to either drop tokens assigned to popular experts, degrading convergence, or frequently rebalance resources allocated to each expert based on popularity, incurring high state migration overheads. To break this performance-accuracy tradeoff, we introduce SwiftMoE, an adaptive MoE training system. The key insight of SwiftMoE is to decouple the placement of expert parameters from their large optimizer state. SwiftMoE statically partitions the optimizer of each expert across all training nodes. Meanwhile, SwiftMoE dynamically adjusts the placement of expert parameters by repurposing existing weight updates, avoiding migration overheads. In doing so, SwiftMoE right-sizes the GPU resources allocated to each expert, on a per-iteration basis, with minimal overheads. Compared to state-of-the-art MoE training systems, DeepSpeed and FlexMoE, SwiftMoE is able to achieve a 30.5% and 25.9% faster time-to-convergence, respectively.
Problem

Research questions and friction points this paper is trying to address.

Handles load imbalance in MoE training without token dropping
Reduces state migration overheads in dynamic expert resource allocation
Improves training efficiency by decoupling expert and optimizer placement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decouples expert parameters from optimizer state
Statically partitions optimizer across all nodes
Dynamically adjusts expert placement without migration
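The decoupling in these bullets can be sketched as follows. This is an illustrative toy (all names, shapes, and the SGD-style update are assumptions, not the paper's implementation): each node statically owns a fixed 1/N slice of every expert's parameters and optimizer state, so changing which GPUs host an expert's replicas never moves optimizer state; only the updated weights travel, which a normal weight update already requires.

```python
import numpy as np

NUM_NODES, NUM_EXPERTS, DIM = 4, 2, 8
SHARD = DIM // NUM_NODES

# Static partition: node n always owns slice n of every expert's
# parameters (and, by extension, the matching optimizer-state shard).
params = {e: np.zeros(DIM) for e in range(NUM_EXPERTS)}

def apply_update_and_remap(grads, replica_plan):
    """Hypothetical sketch: each node updates only its static shard, then
    the fully updated weights are handed to whichever nodes host replicas
    this iteration (replica_plan: node_id -> list of expert_ids).
    No optimizer state ever migrates."""
    for e, g in grads.items():
        for n in range(NUM_NODES):
            sl = slice(n * SHARD, (n + 1) * SHARD)
            params[e][sl] -= 0.1 * g[sl]  # node n touches only its shard
    # New placement takes effect by reusing the weight-update broadcast.
    return {n: [params[e].copy() for e in experts]
            for n, experts in replica_plan.items()}
```

The point of the sketch is the invariant: `replica_plan` can change arbitrarily between iterations, yet the only data movement it adds is the weight broadcast that dense training pays anyway.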
Athinagoras Skiadopoulos
Stanford University

Mark Zhao
University of Colorado Boulder
Computer Systems · Systems for ML · Cloud Computing

Swapnil Gandhi
Stanford University

Thomas Norrie
OpenAI

Shrijeet Mukherjee
Enfabrica

Christos Kozyrakis
Stanford University
Computer Architecture · Computer Systems · Cloud Computing