AI Summary
This work investigates the trade-off between model expressivity and generalization performance in Mixture-of-Experts (MoE) architectures under communication constraints. For the first time, rate-distortion theory is introduced into MoE analysis by modeling the gating mechanism as a stochastic channel operating at a finite rate. By integrating mutual-information-based generalization bounds with the rate-distortion function $D(R_g)$, the study establishes a quantitative relationship between the gating communication rate and the generalization error. A theoretical upper bound on the generalization error is derived and validated through synthetic multi-expert simulations, which demonstrate that reducing the gating rate, while limiting expressivity, can enhance generalization. Based on these insights, the paper proposes a capacity-aware design principle for MoE systems, offering theoretical guidance for efficient model construction in resource-constrained settings.
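For readers unfamiliar with the notation, $D(R_g)$ can be read as the standard Shannon rate-distortion function specialized to the gating variable $T$. The display below is a minimal sketch under that standard definition; the generic distortion measure $d(\cdot,\cdot)$ and the prediction map $\hat{Y}(X,T)$ are placeholders, since the abstract does not specify the letter's exact choices:

$$
D(R_g) \;=\; \inf_{P_{T\mid X}\,:\; I(X;T)\,\le\, R_g} \; \mathbb{E}\!\left[\, d\!\left(Y,\hat{Y}(X,T)\right) \right],
\qquad R_g := I(X;T),
$$

where $\hat{Y}(X,T)$ denotes the MoE prediction once the gate output $T$ has been communicated at rate $R_g$.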
Abstract
Mixture-of-Experts (MoE) architectures decompose prediction tasks into specialized expert sub-networks selected by a gating mechanism. This letter adopts a communication-theoretic view of MoE gating, modeling the gate as a stochastic channel operating under a finite information rate. Within an information-theoretic learning framework, we specialize a mutual-information generalization bound and develop a rate-distortion characterization $D(R_g)$ of finite-rate gating, where $R_g := I(X; T)$, yielding (under a standard empirical rate-distortion optimality condition) $\mathbb{E}[R(W)] \le D(R_g) + \delta_m + \sqrt{(2/m)\, I(S; W)}$. The analysis yields capacity-aware limits for communication-constrained MoE systems, and numerical simulations on synthetic multi-expert models empirically confirm the predicted trade-offs between gating rate, expressivity, and generalization.
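As a purely illustrative companion to the synthetic experiments mentioned above (not the letter's actual protocol), the sketch below emulates a communication-constrained gate by allowing it at most $2^{R_g}$ distinguishable regions, a crude proxy for the constraint $I(X;T) \le R_g$. The synthetic data model, the KMeans-based gate, and the Ridge experts are all assumptions made for this toy example; the printed train/test gap is an empirical stand-in for the generalization term that the bound controls.

```python
# Toy illustration only: a synthetic multi-expert regression task in which the
# gate is restricted to 2**R_g regions, a crude proxy for I(X; T) <= R_g.
# The data model, KMeans gate, and Ridge experts are assumptions for this sketch.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
d, K_true, m, m_test = 5, 8, 200, 5000

# Ground-truth mixture: K_true random linear experts, selected by nearest center.
centers = rng.normal(size=(K_true, d))
weights = rng.normal(size=(K_true, d))

def sample(n):
    X = rng.normal(size=(n, d))
    k = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    y = np.einsum("nd,nd->n", X, weights[k]) + 0.1 * rng.normal(size=n)
    return X, y

Xtr, ytr = sample(m)
Xte, yte = sample(m_test)

def fit_moe(rate_bits):
    """Fit a finite-rate MoE: the gate distinguishes at most 2**rate_bits regions."""
    n_gates = 2 ** rate_bits
    gate = KMeans(n_clusters=n_gates, n_init=5, random_state=0).fit(Xtr)
    experts = []
    for g in range(n_gates):
        idx = gate.labels_ == g
        experts.append(Ridge(alpha=1.0).fit(Xtr[idx], ytr[idx]) if idx.any() else None)

    def predict(X):
        labels = gate.predict(X)
        out = np.zeros(len(X))
        for g, reg in enumerate(experts):
            sel = labels == g
            if sel.any() and reg is not None:
                out[sel] = reg.predict(X[sel])
        return out

    return predict

# Sweep the gating rate and report the train/test error gap.
for rate_bits in range(0, 5):
    pred = fit_moe(rate_bits)
    tr = np.mean((pred(Xtr) - ytr) ** 2)
    te = np.mean((pred(Xte) - yte) ** 2)
    print(f"R_g = {rate_bits} bits | train MSE {tr:.3f} | test MSE {te:.3f} | gap {te - tr:.3f}")
```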