Learning large softmax mixtures with warm start EM

📅 2024-09-16
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates the theoretical foundations and initialization strategies for the Expectation-Maximization (EM) algorithm in high-dimensional softmax mixtures (SMMs). Addressing two bottlenecks—slow EM convergence and unstable moment estimation—in large-$p$ (high-dimensional) and large-$N$ (large-sample) regimes, the authors propose a two-stage "moment estimation + EM" framework. First, they construct the first theoretically guaranteed moment-based estimator for latent variables in SMMs; second, they provide the first convergence analysis of EM for this model. Crucially, the moment estimator serves as a warm start for EM, combining statistical consistency with computational efficiency. The resulting estimator is proved to achieve consistent parameter recovery in polynomial time, with explicit error bounds and asymptotic normality. Empirical evaluations demonstrate substantially faster convergence and better numerical stability than standard EM with random initialization.
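The two-stage pipeline summarized above can be sketched roughly as follows. This is a minimal illustrative toy, not the paper's procedure: the MoM warm start is stood in by a given initializer `beta_init`, and the M-step is approximated by a single gradient step on each component's weighted multinomial-logit log-likelihood (the paper's actual updates and MoM construction are not reproduced here).

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D score vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def em_softmax_mixture(choices, X, beta_init, alpha_init, n_iter=50, lr=0.1):
    """Plain EM for a K-component softmax mixture over p items.

    choices: (N,) observed item indices in [0, p)
    X:       (p, L) item feature vectors
    beta_init, alpha_init: warm start (e.g. from a MoM-style estimator)
    """
    beta = beta_init.copy()      # (K, L) component parameters
    alpha = alpha_init.copy()    # (K,)   mixing weights
    K = beta.shape[0]
    N = len(choices)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observed choice.
        probs = np.stack([softmax(X @ beta[k]) for k in range(K)])  # (K, p)
        lik = alpha[:, None] * probs[:, choices]                    # (K, N)
        resp = lik / lik.sum(axis=0, keepdims=True)
        # M-step: mixing weights have a closed form ...
        alpha = resp.mean(axis=1)
        # ... and each beta_k takes one gradient step on its weighted log-likelihood.
        for k in range(K):
            w = resp[k]
            grad = X[choices].T @ w - w.sum() * (X.T @ probs[k])
            beta[k] += lr * grad / N
    return alpha, beta
```

With a reasonable warm start, the E-step responsibilities start out nearly correct, which is the mechanism behind the paper's claim that MoM initialization avoids the poor local behavior of randomly initialized EM.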

📝 Abstract
Mixed multinomial logits are discrete mixtures introduced several decades ago to model the probability of choosing an attribute from $p$ possible candidates, in heterogeneous populations. The model has recently attracted attention in the AI literature, under the name softmax mixtures, where it is routinely used in the final layer of a neural network to map a large number $p$ of vectors in $\mathbb{R}^L$ to a probability vector. Despite its wide applicability and empirical success, statistically optimal estimators of the mixture parameters, obtained via algorithms whose running time scales polynomially in $L$, are not known. This paper provides a solution to this problem for contemporary applications, such as large language models, in which the mixture has a large number $p$ of support points, and the size $N$ of the sample observed from the mixture is also large. Our proposed estimator combines two classical estimators, obtained respectively via a method of moments (MoM) and the expectation-maximization (EM) algorithm. Although both estimator types have been studied, from a theoretical perspective, for Gaussian mixtures, no similar results exist for softmax mixtures for either procedure. We develop a new MoM parameter estimator based on latent moment estimation that is tailored to our model, and provide the first theoretical analysis for a MoM-based procedure in softmax mixtures. Although consistent, MoM for softmax mixtures can exhibit poor numerical performance, as observed in other mixture models. Nevertheless, as the MoM estimator provably lands in a neighborhood of the target, it can be used as a warm start for any iterative algorithm. We study in detail the EM algorithm, and provide its first theoretical analysis for softmax mixtures. Our final proposal for parameter estimation is the EM algorithm with a MoM warm start.
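Concretely, the model the abstract describes can be written as follows (the notation here is a standard formulation assumed for illustration, not copied from the paper): with item features $x_1,\dots,x_p \in \mathbb{R}^L$, mixing weights $\alpha_1,\dots,\alpha_K$, and component parameters $\beta_1,\dots,\beta_K \in \mathbb{R}^L$,

$$
\Pr(Y = j) \;=\; \sum_{k=1}^{K} \alpha_k \, \frac{\exp\!\big(x_j^\top \beta_k\big)}{\sum_{l=1}^{p} \exp\!\big(x_l^\top \beta_k\big)}, \qquad j \in \{1,\dots,p\},
$$

i.e. a convex combination of $K$ softmax (multinomial logit) distributions over the $p$ candidates, which is why the large-$p$ regime makes both moment estimation and EM nontrivial.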
Problem

Research questions and friction points this paper is trying to address.

Analyzing EM algorithm for high-dimensional softmax mixtures
Proving local and full identifiability in SMMs with random features
Developing warm start methods for EM initialization
Innovation

Methods, ideas, or system contributions that make the work stand out.

EM algorithm for high-dimensional softmax mixtures
Warm start EM with moments estimation method
Method of moments estimator for mixture parameters