MiLo: Efficient Quantized MoE Inference with Mixture of Low-Rank Compensators

📅 2025-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing MoE models suffer severe accuracy degradation under ultra-low-bit quantization (e.g., 3-bit), hindering their efficient deployment on edge devices. Method: This paper proposes a lightweight low-rank compensation scheme tailored to MoE's hybrid dense-sparse architecture. It introduces (1) the first mixture of low-rank compensators explicitly designed for MoE's dual-path topology; (2) adaptive rank allocation coupled with data-free iterative calibration for efficient accuracy recovery; and (3) custom Tensor Core-optimized 3-bit kernels co-designed with sparse routing. Contribution/Results: The method incurs negligible memory overhead while achieving near-full-precision accuracy for 3-bit quantized state-of-the-art MoE models across multiple benchmarks. It also measurably reduces inference latency, enabling practical, high-efficiency edge deployment of large MoE models.

📝 Abstract
A critical approach for efficiently deploying Mixture-of-Experts (MoE) models with massive parameters is quantization. However, state-of-the-art MoE models suffer from non-negligible accuracy loss with extreme quantization, such as under 4 bits. To address this, we introduce MiLo, a novel method that augments highly quantized MoEs with a mixture of low-rank compensators. These compensators consume only a small amount of additional memory but significantly recover accuracy loss from extreme quantization. MiLo also identifies that MoE models exhibit distinctive characteristics across weights due to their hybrid dense-sparse architectures, and employs adaptive rank selection policies along with iterative optimizations to close the accuracy gap. MiLo does not rely on calibration data, allowing it to generalize to different MoE models and datasets without overfitting to a calibration set. To avoid the hardware inefficiencies of extreme quantization, such as 3-bit, MiLo develops Tensor Core-friendly 3-bit kernels, enabling measured latency speedups on 3-bit quantized MoE models. Our evaluation shows that MiLo outperforms existing methods on SoTA MoE models across various tasks.
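The core idea described above — quantize aggressively, then recover the residual with a small low-rank term — can be sketched as follows. The group size, fixed rank, and SVD-based fitting here are illustrative assumptions, not MiLo's exact procedure (which uses adaptive per-layer rank selection and iterative, data-free optimization):

```python
import numpy as np

def quantize_3bit(w, group_size=64):
    """Symmetric per-group 3-bit quantization (8 levels, -4..3).
    Returns the dequantized weights (illustrative, not MiLo's quantizer)."""
    flat = w.reshape(-1, group_size)
    scale = np.abs(flat).max(axis=1, keepdims=True) / 4.0
    scale[scale == 0] = 1.0
    q = np.clip(np.round(flat / scale), -4, 3)
    return (q * scale).reshape(w.shape)

def low_rank_compensator(w, w_deq, rank=16):
    """Fit A @ B to the quantization residual via truncated SVD,
    so the deployed weight becomes w_deq + A @ B."""
    u, s, vt = np.linalg.svd(w - w_deq, full_matrices=False)
    A = u[:, :rank] * s[:rank]   # (out_dim, rank)
    B = vt[:rank, :]             # (rank, in_dim)
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256)).astype(np.float32)
W_deq = quantize_3bit(W)
A, B = low_rank_compensator(W, W_deq, rank=16)
err_q = np.linalg.norm(W - W_deq)
err_comp = np.linalg.norm(W - (W_deq + A @ B))
assert err_comp < err_q  # the compensator strictly reduces reconstruction error
```

The memory cost of the compensator is only `rank * (out_dim + in_dim)` extra parameters per matrix, which is why the abstract can claim "a small amount of additional memory" relative to the full weight.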
Problem

Research questions and friction points this paper is trying to address.

Recover accuracy loss in extremely quantized MoE models
Adaptive rank selection for hybrid dense-sparse architectures
Enable efficient 3-bit quantized MoE inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses low-rank compensators for accuracy recovery
Adapts rank selection for hybrid dense-sparse architectures
Implements Tensor Core-friendly 3-bit kernels
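The last bullet concerns handling sub-byte weights efficiently. A minimal CPU-side sketch of 3-bit weight packing (eight 3-bit values per three bytes) illustrates the storage layout such kernels must decode; the paper's actual kernels are Tensor Core-oriented GPU code, and this layout is an assumption for illustration only:

```python
import numpy as np

def pack_3bit(q):
    """Pack unsigned 3-bit ints (0..7) into a compact byte stream:
    eight 3-bit values occupy exactly three bytes (24 bits)."""
    assert q.size % 8 == 0
    groups = q.astype(np.uint32).reshape(-1, 8)
    words = np.zeros(len(groups), dtype=np.uint32)
    for i in range(8):                      # accumulate eight 3-bit fields
        words |= groups[:, i] << (3 * i)    # into one 24-bit word
    out = np.empty((len(words), 3), dtype=np.uint8)
    for b in range(3):                      # spill each word into three bytes
        out[:, b] = (words >> (8 * b)) & 0xFF
    return out.reshape(-1)

def unpack_3bit(packed, n):
    """Inverse of pack_3bit: recover the first n 3-bit values."""
    b = packed.reshape(-1, 3).astype(np.uint32)
    words = b[:, 0] | (b[:, 1] << 8) | (b[:, 2] << 16)
    q = np.empty((len(words), 8), dtype=np.uint8)
    for i in range(8):
        q[:, i] = (words >> (3 * i)) & 0x7
    return q.reshape(-1)[:n]

q = np.random.default_rng(1).integers(0, 8, size=64, dtype=np.uint8)
packed = pack_3bit(q)
assert packed.size == 24                      # 3 bits/weight vs 8 bits unpacked
assert np.array_equal(unpack_3bit(packed, 64), q)
```

Because 3 does not divide 8, naive per-byte storage wastes bits or forces unaligned reads — this is the "hardware inefficiency of extreme quantization" the abstract mentions, and why a custom kernel is needed to dequantize on the fly.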
Beichen Huang
Hong Kong Polytechnic University
Yueming Yuan
SSAIL Lab, Department of Computer Science, University of Illinois Urbana-Champaign, Urbana, United States
Zelei Shao
SSAIL Lab, Department of Computer Science, University of Illinois Urbana-Champaign, Urbana, United States
Minjia Zhang
University of Illinois at Urbana-Champaign
Parallelism · Machine Learning Systems · Model Compression · LLM Application