🤖 AI Summary
This work addresses the high computational cost, memory footprint, and energy consumption associated with training and inference in Mixture-of-Experts large language models (MoE-LLMs) by proposing MoEITS, an information-theoretic expert pruning algorithm. MoEITS leverages an information-theory-driven simplification mechanism to substantially reduce computational and memory overhead while preserving model performance. Comprehensive theoretical analysis and systematic experiments demonstrate that MoEITS consistently outperforms existing pruning methods across prominent MoE architectures—including Mixtral, Qwen1.5, and DeepSeek-V2-Lite—achieving an effective trade-off between accuracy and efficiency. The approach thus enables the development of high-performance, energy-efficient, and lightweight MoE-LLMs.
📝 Abstract
Large language models are transforming academia and industry alike, attracting the attention of researchers, practitioners, and the general public. In the quest for more powerful architectures, Mixture-of-Experts models, inspired by ensemble methods, have emerged as one of the most effective directions. However, their scale implies a high computational burden for both training and inference. To reduce their impact on computation and memory footprint, as well as on energy consumption, simplification methods have emerged as highly effective procedures.
In this paper, an original algorithm for MoE-LLM simplification, MoEITS, is presented. The algorithm is characterized by a refined simplicity, underpinned by standard information-theoretic frameworks. MoEITS is analyzed in depth from both theoretical and practical points of view: its computational complexity is studied, and its effect on the accuracy of the simplified LLMs, together with the reduction rate achieved, is assessed through a carefully designed experimental study. This empirical evaluation includes a comparison with state-of-the-art MoE-LLM pruning methods applied to Mixtral $8\times7$B, Qwen1.5-2.7B, and DeepSeek-V2-Lite. The extensive experimentation conducted demonstrates that MoEITS outperforms state-of-the-art techniques, generating models that are both effective across all benchmarks and computationally efficient.
The code implementing the method will be made available at https://github.com/luisbalru/MoEITS.