🤖 AI Summary
This paper addresses the I/O–computation mismatch and pipeline bubbles that arise in MoE large-model inference from imbalanced expert activation. It proposes an expert-aware multi-batch pipelining paradigm. Key contributions are: (1) the first constraint-sensitive I/O–computation co-scheduler, enabling load balancing across heterogeneous compute and I/O times; (2) a correlation-aware expert prefetcher that supports dynamic batching and hierarchical CPU/disk offloading based on modelled activation patterns; and (3) computation–loading overlap optimization that significantly reduces pipeline bubbles. Experiments demonstrate up to 85.12× throughput improvement over state-of-the-art methods, while substantially improving the throughput–latency trade-off under stringent latency constraints.
📝 Abstract
Mixture of Experts (MoE), with its distinctive sparse structure, enables the scaling of language models up to trillions of parameters without significantly increasing computational costs. However, the substantial parameter size presents a challenge for inference, as the growth of GPU memory cannot keep pace with the growth in parameters. Although offloading techniques utilise memory from the CPU and disk and parallelise I/O and computation for efficiency, the computation time for each expert in MoE models is often shorter than its I/O time, resulting in numerous bubbles in the pipeline. Therefore, we propose Klotski, an efficient MoE inference engine that significantly reduces pipeline bubbles through a novel expert-aware multi-batch pipeline paradigm. The proposed paradigm uses batch processing to extend the computation time of the current layer so that it overlaps with the loading time of the next layer. Although this idea has been applied effectively to dense models, in MoE models more batches may activate more experts, leading to longer loading times and more bubbles. Thus, unlike traditional approaches, we balance computation and I/O time and minimise bubbles by orchestrating the experts' inference order based on their heterogeneous computation and I/O requirements and their activation patterns under different batch numbers. Moreover, to adapt to different hardware environments and models, we design a constraint-sensitive I/O-compute planner and a correlation-aware expert prefetcher that together produce a schedule that minimises pipeline bubbles. Experimental results demonstrate that Klotski achieves a superior throughput–latency trade-off compared to state-of-the-art techniques, with throughput improvements of up to 85.12×.
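The core tension the abstract describes — batching stretches compute to hide I/O, but in MoE models more batches activate more experts and so also stretch the I/O — can be sketched with a toy cost model. All numbers, function names, and the saturation assumption below are illustrative, not taken from the paper:

```python
# Toy cost model (not the paper's actual scheduler) of pipeline bubbles
# when overlapping next-layer expert loading with current-layer compute.

def experts_activated(num_batches, top_k=2, total_experts=8):
    # Crude assumption: each extra batch activates top_k additional
    # distinct experts until every expert in the layer has been touched.
    return min(total_experts, top_k * num_batches)

def layer_bubble(num_batches, compute_per_batch=2.0, load_per_expert=3.0):
    # Bubble = time (ms) the GPU idles waiting for next-layer experts,
    # after overlapping that I/O with the current layer's computation.
    compute = num_batches * compute_per_batch
    io = experts_activated(num_batches) * load_per_expert
    return max(0.0, io - compute)

for b in (1, 2, 4, 8, 16):
    print(f"batches={b:2d}  bubble={layer_bubble(b):4.1f} ms")
```

Under these made-up numbers the bubble first *grows* with batch count (I/O grows faster than compute) and only shrinks once expert activation saturates — which is why, per the abstract, a dense-model batching strategy does not transfer directly to MoE and an activation-aware schedule is needed.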