🤖 AI Summary
Current research on Mixture-of-Experts (MoE) architectures predominantly emphasizes performance gains, while lacking a systematic understanding of internal mechanisms, including parameter utilization, expert collaboration, and training dynamics.
Method: We propose a unified framework for routing analysis and expert behavioral modeling, introducing the Model Utilization Index (MUI), a novel quantitative metric. Our approach integrates expert-level behavioral quantification, fine-grained routing tracing, neuron activation modeling, and cross-model systematic comparison.
Contribution/Results: We uncover three fundamental principles: (i) neuron utilization follows a pronounced power-law decay; (ii) tasks are typically solved through collaborative activation of multiple experts; and (iii) activation patterns strongly correlate with input data diversity. Experiments demonstrate that MUI effectively characterizes MoE capacity allocation, dynamic evolution, and expert specialization—providing both theoretical foundations and practical tools for interpretable design and efficient architectural optimization.
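The summary above does not spell out how MUI is computed, so as a rough illustration only: a utilization-style index can be read as the fraction of neurons a task ever activates above some threshold. The function name `model_utilization_index` and the threshold convention here are hypothetical, not the paper's definition.

```python
def model_utilization_index(activations, threshold=0.0):
    """Hypothetical MUI-style score: the fraction of neurons that fire
    (activation above `threshold`) on at least one input in a task.

    `activations` is a list of per-input activation vectors, one float
    per neuron. Illustrative sketch only; not the paper's exact metric.
    """
    if not activations:
        return 0.0
    n_neurons = len(activations[0])
    used = set()
    for vec in activations:
        for i, a in enumerate(vec):
            if a > threshold:
                used.add(i)  # neuron i was utilized at least once
    return len(used) / n_neurons

# Two inputs over four neurons: only neurons 0 and 2 ever activate,
# so the index is 2/4 = 0.5.
acts = [[0.9, 0.0, 0.0, 0.0],
        [0.4, 0.0, 0.3, 0.0]]
print(model_utilization_index(acts))  # 0.5
```

Under this reading, a lower index on a fixed task suite means the model solves the tasks while touching fewer of its neurons, which is how the summary connects utilization to generalization.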
📝 Abstract
Mixture-of-Experts (MoE) architectures have emerged as a promising direction, offering efficiency and scalability by activating only a subset of parameters during inference. However, current research remains largely performance-centric, with limited understanding of their internal mechanisms, thereby constraining broader progress. In this work, we use an internal metric, the Model Utilization Index (MUI), to investigate the mechanisms of the MoE architecture by explicitly incorporating routing mechanisms and analyzing expert-level behaviors. Through systematic analyses of a wide range of publicly available MoE models, we uncover several findings: (1) neuron utilization decreases as models evolve, reflecting stronger generalization; (2) training exhibits a dynamic trajectory, where benchmark performance alone provides limited signal while MUI reveals deeper insights; (3) task completion emerges from collaborative contributions of multiple experts, with shared experts driving concentration; and (4) activation patterns at the neuron level provide a fine-grained proxy for data diversity. Together, these results demonstrate the potential of MUI as a complementary indicator to benchmark performance, offering new insights into the capacity, dynamics, and specialization of MoE models. Our project can be found at https://yingjiahao14.github.io/MoE-MUI/.
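The routing mechanism the abstract refers to can be sketched minimally: a router scores all experts per token and activates only the top-k, which is why only a subset of parameters runs at inference. This is a generic top-k softmax gate under stated assumptions, not the routing of any specific model analyzed in the paper; real routers also add load-balancing losses and, in some architectures, always-on shared experts.

```python
import math

def top_k_route(logits, k=2):
    """Minimal sketch of MoE top-k routing: select the k experts with
    the highest router logits and renormalize their softmax weights so
    the selected gates sum to 1. Illustrative only."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exp = {i: math.exp(logits[i]) for i in top}
    z = sum(exp.values())
    return {i: exp[i] / z for i in top}  # expert index -> gate weight

# Four experts, token routed to the top 2 (experts 0 and 2 here);
# their gate weights sum to 1 and scale each expert's output.
gates = top_k_route([2.0, 0.5, 1.0, -1.0], k=2)
print(gates)
```

Tracing which experts receive which tokens under such a gate, across a task, is the kind of fine-grained routing trace the paper's expert-collaboration findings are built on.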