🤖 AI Summary
Automatic selection of the number of experts in Gaussian-gated Gaussian mixture-of-experts (MoE) models with covariates remains challenging because covariate-dependent gating induces strong coupling between the gating and expert parameters.
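For concreteness, one standard parameterization of this model class is sketched below; the symbols are illustrative notation (a minimal sketch, not necessarily the paper's exact parameterization):

```latex
% Gaussian-gated Gaussian MoE with K experts; pi_k, c_k, Gamma_k, a_k, b_k, sigma_k
% are assumed notation for illustration.
f(y \mid x) = \sum_{k=1}^{K}
  \underbrace{\frac{\pi_k\, \mathcal{N}(x \mid c_k, \Gamma_k)}
                   {\sum_{j=1}^{K} \pi_j\, \mathcal{N}(x \mid c_j, \Gamma_j)}}_{\text{Gaussian gate}}
  \,\underbrace{\mathcal{N}\!\big(y \mid a_k^{\top} x + b_k,\ \sigma_k^2\big)}_{\text{Gaussian expert}}
```

Because the covariate $x$ enters both the gate and the expert mean, the two parameter groups cannot be estimated independently, which is the coupling referred to above.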
Method: We propose a search-free model selection framework based on the *dendrogram of mixing measures*, extended here for the first time to Gaussian-gated MoE models. By jointly modeling the gating and expert structures, our approach mitigates this parameter entanglement; see the sketch below.
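A minimal sketch of the dendrogram idea, assuming a single overfitted model with `K_max` experts has already been fitted and each expert's gating and expert parameters have been stacked into one atom vector. The Euclidean merge height, the barycenter merging rule, and the largest-gap read-out are simplifying assumptions for illustration, not the paper's exact construction:

```python
import numpy as np

def build_dendrogram(atoms, weights):
    """Greedily merge the two closest atoms of an overfitted mixing measure,
    recording the distance ("height") at which each merge happens."""
    atoms = [np.asarray(a, dtype=float) for a in atoms]
    weights = [float(w) for w in weights]
    heights = []
    while len(atoms) > 1:
        # Find the pair of atoms that are closest in parameter space.
        pairs = [(np.linalg.norm(atoms[i] - atoms[j]), i, j)
                 for i in range(len(atoms)) for j in range(i + 1, len(atoms))]
        d, i, j = min(pairs)
        heights.append(d)
        # Replace the pair by its weight-averaged barycenter, pooling the weights.
        w = weights[i] + weights[j]
        merged = (weights[i] * atoms[i] + weights[j] * atoms[j]) / w
        atoms = [a for k, a in enumerate(atoms) if k not in (i, j)] + [merged]
        weights = [v for k, v in enumerate(weights) if k not in (i, j)] + [w]
    return heights  # heights[t]: cost of merging from K_max - t down to K_max - t - 1 atoms

def select_num_experts(heights):
    """Heuristic read-out: stop merging just before the largest jump in heights."""
    gaps = np.diff(heights)
    t_star = int(np.argmax(gaps))           # merges 0..t_star are cheap; the next one is not
    return len(heights) + 1 - (t_star + 1)  # K_max minus the number of cheap merges
```

The point of this construction is that the whole candidate path over the number of experts comes from one fitted model, rather than from a search over refits.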
Contribution/Results: We establish theoretical guarantees: even when the model is overfitted, the method consistently estimates the true number of experts, achieves the pointwise optimal convergence rate for parameter estimation, and matches the minimax lower bound for estimating the regression function. Empirically, it significantly outperforms classical criteria, including the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the integrated completed likelihood (ICL), on synthetic benchmarks. The framework offers an interpretable, computationally efficient, and statistically rigorous route to automatic architecture selection in high-dimensional and deep MoE models.
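For contrast with these baselines: AIC, BIC, and ICL each require fitting and scoring one model per candidate number of experts. A hedged sketch of their textbook definitions (standard formulas, not code from the paper):

```python
import numpy as np

def information_criteria(loglik, n_params, resp):
    """Standard criteria for one fitted candidate model (smaller is better).
    loglik: maximized log-likelihood; n_params: number of free parameters;
    resp: (n, K) posterior responsibilities from the E-step."""
    n = resp.shape[0]
    aic = -2.0 * loglik + 2.0 * n_params
    bic = -2.0 * loglik + n_params * np.log(n)
    # ICL = BIC plus an entropy penalty on the soft cluster assignments.
    entropy = -np.sum(resp * np.log(np.clip(resp, 1e-12, None)))
    icl = bic + 2.0 * entropy
    return {"AIC": aic, "BIC": bic, "ICL": icl}
```

The dendrogram, by contrast, is read off a single overfitted fit, which is the source of the computational saving claimed above.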
📝 Abstract
Mixture of Experts (MoE) models constitute a widely used class of ensemble learning approaches in statistics and machine learning, known for their flexibility and computational efficiency. They have become integral components of numerous state-of-the-art deep neural network architectures, particularly for analyzing heterogeneous data across diverse domains. Despite their practical success, the theoretical understanding of model selection, especially concerning the optimal number of mixture components or experts, remains limited and poses significant challenges. These challenges stem primarily from the inclusion of covariates in both the Gaussian gating functions and the expert networks, which introduces intrinsic interactions between the gating and expert parameters, characterized by partial differential equations. In this paper, we revisit the concept of dendrograms of mixing measures and introduce a novel extension to Gaussian-gated Gaussian MoE models that enables consistent estimation of the true number of mixture components and achieves the pointwise optimal convergence rate for parameter estimation in overfitted scenarios. Notably, this approach circumvents the need to train and compare a range of models with varying numbers of components, thereby alleviating the computational burden, particularly in high-dimensional or deep neural network settings. Experimental results on synthetic data demonstrate the effectiveness of the proposed method in accurately recovering the number of experts: it outperforms common criteria such as the Akaike information criterion, the Bayesian information criterion, and the integrated completed likelihood, while achieving the optimal convergence rate for parameter estimation and accurately approximating the regression function.