🤖 AI Summary
This work addresses the pervasive mode-collapse issue in mean-field variational inference (MFVI) when approximating multimodal distributions. We first provide a theoretical characterization of this phenomenon: for a bimodal target distribution π = wP₀ + (1−w)P₁, MFVI optimizers tend to concentrate most of their probability mass near a single mode. To quantify the geometric separation between mixture components, we introduce the ε-separatedness condition and derive explicit theoretical bounds on the modal mass allocation under MFVI. Building on this insight, we propose rotational variational inference (RoVI), a method that augments MFVI with orthogonal rotations, preserving the mean-field structure while improving multimodal coverage. Both theoretical analysis and numerical experiments demonstrate that RoVI substantially mitigates mode collapse and improves approximation accuracy. Our work establishes a new analytical framework for diagnosing and improving MFVI, offering principled guidance for designing expressive yet tractable variational families.
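The collapse behavior described above is easy to reproduce numerically. Below is a minimal numpy sketch (not the paper's construction): a mean-field Gaussian q = N(μ, diag(σ²)) is fitted to a two-component Gaussian mixture by stochastic gradient ascent on the reparameterized ELBO. The mode locations, learning rate, and iteration count are illustrative assumptions. The fitted mean lands on one mode and stays far from the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative bimodal target: pi = 0.5 N(a, I) + 0.5 N(-a, I) in 2D.
a = np.array([2.0, 2.0])

def grad_log_pi(x):
    # x: (n, 2). Gradient of log of the two-component Gaussian mixture.
    d0 = x - a  # residual w.r.t. mode +a
    d1 = x + a  # residual w.r.t. mode -a
    e0 = np.exp(-0.5 * np.sum(d0**2, axis=1, keepdims=True))
    e1 = np.exp(-0.5 * np.sum(d1**2, axis=1, keepdims=True))
    w0 = e0 / (e0 + e1)  # responsibility of mode +a
    return -(w0 * d0 + (1.0 - w0) * d1)

# Mean-field Gaussian q = N(mu, diag(exp(2 * log_s))), trained by
# stochastic gradient ascent on the reparameterized ELBO.
mu = rng.normal(scale=0.1, size=2)
log_s = np.zeros(2)
lr, batch = 0.05, 64
for _ in range(3000):
    eps = rng.standard_normal((batch, 2))
    x = mu + np.exp(log_s) * eps          # reparameterization trick
    g = grad_log_pi(x)
    mu += lr * g.mean(axis=0)             # d ELBO / d mu
    log_s += lr * ((g * eps * np.exp(log_s)).mean(axis=0) + 1.0)  # +1 from entropy

# Mode collapse: mu ends up near one mode, far from the other.
d_plus = np.linalg.norm(mu - a)
d_minus = np.linalg.norm(mu + a)
print(min(d_plus, d_minus), max(d_plus, d_minus))
```

Which mode the optimizer collapses onto depends on the random initialization and the sampling noise; the fraction of mass assigned to the losing mode is essentially zero, consistent with the bounds described above.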
📝 Abstract
Mean-field variational inference (MFVI) is a widely used method for approximating high-dimensional probability distributions by product measures. It has been empirically observed that MFVI optimizers often suffer from mode collapse. Specifically, when the target measure $\pi$ is a mixture $\pi = w P_0 + (1 - w) P_1$, the MFVI optimizer tends to place most of its mass near a single component of the mixture. This work provides the first theoretical explanation of mode collapse in MFVI. We introduce a notion that captures how separated the two mixture components are -- called $\varepsilon$-separatedness -- and derive explicit bounds on the fraction of mass that any MFVI optimizer assigns to each component when $P_0$ and $P_1$ are $\varepsilon$-separated for sufficiently small $\varepsilon$. Our results suggest that the occurrence of mode collapse crucially depends on the relative position of the components. To address this issue, we propose rotational variational inference (RoVI), which augments MFVI with a rotation matrix. Numerical studies support our theoretical findings and demonstrate the benefits of RoVI.
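To see why a rotation can help, consider a toy setup (illustrative, not taken from the paper): if the two components differ only along a diagonal direction, no product measure in the original coordinates separates them cleanly, but after an orthogonal change of coordinates the modes line up along a single axis, and a product measure whose first marginal is bimodal covers both. The sketch below builds such a rotated product family by hand and checks that it splits its mass evenly between the modes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative target modes at +-a: they differ only along u = a / ||a||.
a = np.array([2.0, 2.0])
u = a / np.linalg.norm(a)

# RoVI-style family (sketch): an orthogonal rotation R composed with a
# product measure. R maps the first coordinate axis onto u, so in rotated
# coordinates the modes are separated along a single axis.
R = np.column_stack([u, [-u[1], u[0]]])  # orthogonal; R @ e1 = u

def sample_q(n):
    # First rotated coordinate: 1D mixture 0.5 N(+||a||, 1) + 0.5 N(-||a||, 1);
    # second: N(0, 1). Samples are pushed forward through R.
    signs = rng.choice([-1.0, 1.0], size=n)
    z1 = signs * np.linalg.norm(a) + rng.standard_normal(n)
    z2 = rng.standard_normal(n)
    return (R @ np.stack([z1, z2])).T

x = sample_q(20000)
# Assign each sample to its nearest mode; the family covers both.
near_plus = np.linalg.norm(x - a, axis=1) < np.linalg.norm(x + a, axis=1)
print(near_plus.mean())  # close to 0.5: no mode collapse under the rotated family
```

The mean-field structure is preserved because the base measure is still a product; only the fixed orthogonal map changes the coordinates in which that product is taken.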