Delta Decompression for MoE-based LLMs Compression

📅 2025-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the excessive storage and memory overhead of Mixture-of-Experts (MoE) large language models, this paper proposes D²-MoE, a training-free compression framework. The method introduces three key innovations: (1) Fisher-information-weighted base weight fusion, enabling principled expert consolidation; (2) SVD-driven incremental low-rank compression of expert delta weights; and (3) semi-dynamic structured pruning, balancing input-adaptive routing with compression efficiency. Evaluated on prominent MoE models—including Mixtral, Phi-3.5, DeepSeek-MoE, and Qwen2—D²-MoE achieves 40–60% parameter reduction while preserving model functionality. Crucially, it attains an average accuracy improvement of over 13% compared to state-of-the-art compression baselines, without requiring fine-tuning or retraining. The framework is fully compatible with standard inference pipelines and maintains expert sparsity during deployment. All code and implementation details are publicly released.

📝 Abstract
Mixture-of-Experts (MoE) architectures in large language models (LLMs) achieve exceptional performance but face prohibitive storage and memory requirements. To address these challenges, we present $D^2$-MoE, a new delta decompression compressor for reducing the parameters of MoE LLMs. Based on observations of expert diversity, we decompose their weights into a shared base weight and unique delta weights. Specifically, our method first merges each expert's weight into the base weight using the Fisher information matrix to capture shared components. Then, we compress the delta weights through Singular Value Decomposition (SVD) by exploiting their low-rank properties. Finally, we introduce a semi-dynamical structured pruning strategy for the base weights, combining static and dynamic redundancy analysis to achieve further parameter reduction while maintaining input adaptivity. In this way, our $D^2$-MoE successfully compacts MoE LLMs to high compression ratios without additional training. Extensive experiments highlight the superiority of our approach, with over 13% performance gains over other compressors on Mixtral, Phi-3.5, DeepSeek, and Qwen2 MoE LLMs at 40$\sim$60% compression rates. Code is available at https://github.com/lliai/D2MoE.
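As a rough illustration of the decomposition the abstract describes, the sketch below (not the authors' released implementation; the function names, toy weight shapes, and Fisher scores are invented for this example) merges expert weights into a Fisher-weighted base and stores only a low-rank SVD factorization of each expert's delta:

```python
# Hypothetical sketch of delta decompression for MoE expert weights:
# experts share a Fisher-weighted base weight, and each expert keeps
# only the top singular components of its delta from that base.
import numpy as np

def fisher_weighted_base(expert_weights, fisher_scores):
    """Merge expert weights into one base, weighted by Fisher information."""
    scores = np.asarray(fisher_scores, dtype=float)
    coeffs = scores / scores.sum()
    return sum(c * w for c, w in zip(coeffs, expert_weights))

def compress_delta(delta, rank):
    """Keep the top-`rank` singular components of an expert's delta weight."""
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    # factors A (m x r) and B (r x n); A @ B approximates delta
    return u[:, :rank] * s[:rank], vt[:rank, :]

def reconstruct_expert(base, factors):
    """Recover an (approximate) expert weight from the base plus its delta."""
    a, b = factors
    return base + a @ b

# Toy example: 4 experts with 64x64 weights, deltas compressed to rank 8.
rng = np.random.default_rng(0)
experts = [rng.standard_normal((64, 64)) for _ in range(4)]
fisher = [1.0, 0.5, 2.0, 1.5]  # per-expert importance (assumed given)
base = fisher_weighted_base(experts, fisher)
factors = [compress_delta(w - base, rank=8) for w in experts]
approx = [reconstruct_expert(base, f) for f in factors]
```

In this toy setting, per-expert storage drops from m·n entries to r·(m+n) for the delta factors, with one shared m·n base; the paper additionally prunes the base weights, which this sketch omits.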
Problem

Research questions and friction points this paper is trying to address.

Reduce the storage requirements of MoE LLMs
Compress MoE LLMs without additional training
Maintain performance at high compression ratios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes weights into base and delta
Compresses delta weights using SVD
Applies a semi-dynamical structured pruning strategy
Hao Gu
Sun Yat-Sen University
Planetary aeronomy · Atmospheric escape · Space physics
Wei Li
University of Birmingham
Lujun Li
Hong Kong University of Science and Technology
Qiyuan Zhu
Hong Kong University of Science and Technology
Mark Lee
University of Birmingham
Computer Science · Natural Language Processing
Shengjie Sun
AISpeech Co., Ltd.
Wei Xue
Hong Kong University of Science and Technology
Yike Guo
Hong Kong University of Science and Technology