🤖 AI Summary
MoE model inference on consumer-grade GPUs is bottlenecked by limited CPU-GPU bandwidth, while existing prefetching techniques suffer from low efficiency under fine-grained expert partitioning and incur substantial training overhead. To address these challenges, this paper proposes MoBiLE, a plug-and-play hybrid expert inference framework. Its core innovations are: (1) a size-aware expert collaboration mechanism that routes non-critical tokens to lightweight small experts for acceleration while reserving large experts for critical tokens to preserve accuracy; and (2) an integrated strategy combining expert offloading, dynamic prefetching, and fine-grained scheduling to optimize memory switching and data transfer. Evaluated on four mainstream MoE models, MoBiLE achieves a 1.60x to 1.72x inference speedup with negligible accuracy degradation, significantly outperforming state-of-the-art offloading approaches.
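The size-aware routing idea above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the importance threshold `tau`, the `route_tokens` function, and the choice of halving `top_k` for non-critical tokens are assumptions based on the summary's description.

```python
import numpy as np

def route_tokens(router_logits, importance, top_k=8, tau=0.5):
    """Hypothetical big-little routing sketch (names are illustrative):
    critical tokens (importance >= tau) use the full top_k experts,
    non-critical tokens use only half as many."""
    assignments = []
    for logits, score in zip(router_logits, importance):
        k = top_k if score >= tau else top_k // 2   # little path: half the experts
        experts = np.argsort(logits)[-k:][::-1]     # indices of the k largest logits
        assignments.append(experts.tolist())
    return assignments
```

In a real MoE layer the importance score would come from the model itself (e.g. router confidence), and the selected expert FFNs would then be applied to the token's hidden state; here only the selection step is shown.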
📄 Abstract
Mixture-of-Experts (MoE) models have recently demonstrated exceptional performance across a diverse range of applications. The principle of sparse activation in MoE models facilitates an offloading strategy, wherein active experts are maintained in GPU HBM, while inactive experts are stored in CPU DRAM. The efficacy of this approach, however, is fundamentally constrained by the limited bandwidth of the CPU-GPU interconnect. To mitigate this bottleneck, existing approaches have employed prefetching to accelerate MoE inference. These methods attempt to predict and prefetch the required experts using specially trained modules. Nevertheless, such techniques are often encumbered by significant training overhead and have shown diminished effectiveness on recent MoE models with fine-grained expert segmentation.
In this paper, we propose MoBiLE, a plug-and-play offloading-based MoE inference framework with a *mixture of big-little experts*. It halves the number of experts used for unimportant tokens to accelerate inference, while maintaining the full set of experts for important tokens to guarantee model quality. Further, a dedicated fallback and prefetching mechanism is designed for switching between little and big experts to improve memory efficiency. We evaluate MoBiLE on four typical modern MoE architectures and challenging generative tasks. Our results show that MoBiLE achieves a speedup of 1.60x to 1.72x compared to the baseline on a consumer GPU system, with negligible degradation in accuracy.
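The fallback-and-prefetch idea can be illustrated with a minimal cache sketch. This is a hedged reconstruction of the general pattern, not MoBiLE's actual scheduler: the `ExpertCache` class and its methods are hypothetical. The gist is that little experts stay GPU-resident as a guaranteed fallback, while big experts are served only when a prior prefetch has already moved them across the CPU-GPU link.

```python
class ExpertCache:
    """Illustrative big-little fallback cache (not the paper's implementation)."""

    def __init__(self, little_experts):
        self.little = little_experts      # little experts pinned in GPU memory
        self.resident_big = {}            # big experts currently on the GPU
        self.pending = set()              # prefetches in flight over PCIe

    def get(self, expert_id):
        if expert_id in self.resident_big:
            return self.resident_big[expert_id]   # hit: use the big expert
        self.pending.add(expert_id)               # miss: schedule a CPU->GPU prefetch
        return self.little[expert_id]             # fall back to the little expert now

    def complete_prefetch(self, expert_id, weights):
        """Called when an async host-to-device copy finishes."""
        self.pending.discard(expert_id)
        self.resident_big[expert_id] = weights
```

The point of the fallback path is that decoding never stalls on the interconnect: a token whose big expert is not yet resident is served by the little expert immediately, and the prefetch benefits subsequent tokens instead.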