🤖 AI Summary
To address the storage and memory bottlenecks that prevent ultra-large-scale Mixture-of-Experts (MoE) models, such as DeepSeek-V3 with its hundreds of billions of parameters, from being deployed on resource-constrained edge devices, this work proposes a holistic compression framework that jointly optimizes expert pruning, mixed-precision quantization, and activation optimization. To the authors' knowledge, it is the first such integrated approach. By moving beyond conventional single-paradigm compression, it mitigates the accuracy and output-quality degradation that occurs at high compression ratios. Experiments demonstrate a reduction in model storage footprint from 1.3 TB to 103 GB, enabling successful deployment on an edge platform with a 128 GB memory limit. Moreover, compared to uniform low-bit quantization, the method achieves higher benchmark accuracy at smaller model sizes. This synergy significantly improves both the practicality and the energy efficiency of MoE models on edge devices.
📝 Abstract
The Mixture of Experts (MoE) architecture is an important method for scaling Large Language Models (LLMs): it increases model capacity while keeping computation cost low. However, ultra-large MoE models still have hundreds of billions of parameters, requiring massive memory and storage and making deployment on resource-constrained edge platforms difficult. Pruning or quantization alone can hardly close this gap, because the required compression ratio is so aggressive that accuracy and output quality degrade significantly. To facilitate the deployment of ultra-large MoEs on edge platforms, we propose a collaborative compression framework that combines expert pruning, mixed-precision quantization, and activation optimization. It effectively reduces the storage footprint of the ultra-large MoE DeepSeek-V3 from 1.3 TB to 103 GB, while preserving high output quality with better accuracy than traditional uniform low-bit quantization methods. To the best of our knowledge, we are the first to deploy a compressed model derived from the ultra-large DeepSeek-V3 on a platform with a strict 128 GB total memory limit. Our comprehensive experiments on multiple benchmarks under various memory constraints demonstrate the effectiveness of our method, yielding smaller model sizes and higher accuracy than uniform low-bit quantization.
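As a rough sanity check on the reported footprint numbers, the sketch below works out the compression they imply. It assumes DeepSeek-V3's publicly reported total of roughly 671B parameters and decimal TB/GB units; neither assumption appears in the abstract itself.

```python
# Back-of-the-envelope check of the compression reported in the abstract.
# Assumptions (not stated in the abstract): DeepSeek-V3 has ~671B total
# parameters, and TB/GB are decimal (1 TB = 1e12 bytes, 1 GB = 1e9 bytes).

TOTAL_PARAMS = 671e9         # assumed total parameter count
ORIG_BYTES = 1.3e12          # 1.3 TB footprint before compression
COMPRESSED_BYTES = 103e9     # 103 GB footprint after compression

# Overall compression ratio: roughly 12.6x.
ratio = ORIG_BYTES / COMPRESSED_BYTES

# Average bits stored per original parameter, before and after.
bits_before = ORIG_BYTES * 8 / TOTAL_PARAMS       # ~15.5 bits: consistent with 16-bit weights
bits_after = COMPRESSED_BYTES * 8 / TOTAL_PARAMS  # ~1.2 bits: below any uniform low-bit format,
                                                  # so parameters must also be pruned outright,
                                                  # not merely quantized

print(f"compression ratio: {ratio:.1f}x")
print(f"avg bits/param: {bits_before:.1f} -> {bits_after:.2f}")
```

Under these assumptions, even uniform 2-bit quantization of all 671B parameters would occupy about 168 GB, exceeding the 128 GB limit on its own. That is the intuition behind the abstract's claim that quantization alone cannot meet the budget and expert pruning must remove some parameters entirely.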