🤖 AI Summary
To address the excessive storage and memory overhead of Mixture-of-Experts (MoE) large language models, this paper proposes D²-MoE, a training-free compression framework. Our method introduces three key innovations: (1) Fisher-information-weighted base weight fusion, enabling principled expert consolidation; (2) SVD-driven incremental low-rank compression of expert weights; and (3) semi-dynamic structured pruning, balancing input-adaptive routing with compression efficiency. Evaluated on prominent MoE models—including Mixtral, Phi-3.5, DeepSeek-MoE, and Qwen2—D²-MoE achieves 40–60% parameter reduction while preserving model functionality. Crucially, it attains an average accuracy improvement of over 13% compared to state-of-the-art compression baselines, without requiring fine-tuning or retraining. The framework is fully compatible with standard inference pipelines and maintains expert sparsity during deployment. All code and implementation details are publicly released.
📝 Abstract
Mixture-of-Experts (MoE) architectures in large language models (LLMs) achieve exceptional performance, but face prohibitive storage and memory requirements. To address these challenges, we present $D^2$-MoE, a new delta-decompression-based compressor for reducing the parameters of MoE LLMs. Based on observations of expert diversity, we decompose the experts' weights into a shared base weight and unique delta weights. Specifically, our method first merges each expert's weight into the base weight using the Fisher information matrix to capture shared components. Then, we compress the delta weights through Singular Value Decomposition (SVD), exploiting their low-rank properties. Finally, we introduce a semi-dynamic structured pruning strategy for the base weights, combining static and dynamic redundancy analysis to achieve further parameter reduction while maintaining input adaptivity. In this way, $D^2$-MoE compacts MoE LLMs to high compression ratios without additional training. Extensive experiments highlight the superiority of our approach, with over 13% performance gains over other compressors on Mixtral, Phi-3.5, DeepSeek, and Qwen2 MoE LLMs at 40$\sim$60% compression rates. Code is available at https://github.com/lliai/D2MoE.
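The first two steps of the pipeline, Fisher-weighted merging of expert weights into a shared base and low-rank SVD compression of the per-expert deltas, can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: it uses per-expert scalar Fisher weights (the paper uses the Fisher information matrix) and function names (`fisher_merge`, `svd_compress_delta`) that are hypothetical.

```python
import numpy as np

def fisher_merge(experts, fisher):
    """Merge expert weight matrices into one shared base weight.
    Simplification: each expert gets a scalar Fisher weight rather than
    the full Fisher information matrix used in the paper."""
    w = np.asarray(fisher, dtype=float)
    w = w / w.sum()
    return sum(wi * e for wi, e in zip(w, experts))

def svd_compress_delta(delta, rank):
    """Rank-`rank` approximation of an expert's delta weight via truncated SVD.
    Returns factors (U*S, Vt) whose product approximates `delta`."""
    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    return U[:, :rank] * S[:rank], Vt[:rank, :]

# Toy usage: 3 random "experts"; each is reconstructed as base + low-rank delta.
rng = np.random.default_rng(0)
experts = [rng.standard_normal((8, 8)) for _ in range(3)]
base = fisher_merge(experts, fisher=[1.0, 2.0, 1.5])
A, B = svd_compress_delta(experts[0] - base, rank=4)
approx = base + A @ B  # compressed stand-in for experts[0]
err = np.linalg.norm(experts[0] - approx) / np.linalg.norm(experts[0])
```

Storing one base plus rank-`r` factors per expert replaces `E` full matrices with one full matrix and `2·E·r` thin vectors per dimension, which is where the parameter reduction comes from when the deltas are approximately low-rank.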
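The final step, semi-dynamic structured pruning of the base weight, combines an input-independent (static) stage with a per-input (dynamic) stage. The sketch below is a minimal illustration under assumed criteria, magnitude-based column scoring for both stages; the function name and keep ratios are hypothetical and not the paper's exact redundancy analysis.

```python
import numpy as np

def semi_dynamic_prune(base, x, static_keep=0.75, dynamic_keep=0.8):
    """Illustrative semi-dynamic structured pruning of a base weight matrix.
    Static stage: keep input columns with the largest L2 norm (precomputable).
    Dynamic stage: among survivors, keep columns where |x| is largest for
    this particular input, preserving input adaptivity."""
    d_in = base.shape[1]
    # Static stage (input-independent, done once offline).
    col_norms = np.linalg.norm(base, axis=0)
    n_static = int(d_in * static_keep)
    static_idx = np.argsort(col_norms)[-n_static:]
    # Dynamic stage (cheap per-input selection on surviving columns).
    n_dyn = int(n_static * dynamic_keep)
    scores = np.abs(x[static_idx])
    keep = static_idx[np.argsort(scores)[-n_dyn:]]
    mask = np.zeros(d_in, dtype=bool)
    mask[keep] = True
    # Structured pruning: whole columns are dropped, so the matmul shrinks.
    return base[:, mask] @ x[mask], mask

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 8))
x = rng.standard_normal(8)
y, mask = semi_dynamic_prune(W, x)  # keeps int(8*0.75*0.8) = 4 of 8 columns
```

Because entire columns are removed rather than scattered entries, the pruned matmul maps directly onto dense kernels in standard inference pipelines, which is why structured (rather than unstructured) pruning keeps the method deployment-friendly.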