🤖 AI Summary
This study investigates the difference in convergence behavior between two widely used variants of coordinate ascent variational inference (CAVI), the sequential and parallel update schemes, in the setting of moderately high-dimensional linear regression. Drawing on ideas from numerical analysis and optimization theory, the work systematically compares the convergence behavior of the two variants. The analysis shows that sequential CAVI enjoys convergence guarantees under substantially milder conditions, whereas parallel CAVI, despite its computational efficiency, requires stricter assumptions on the model structure. This addresses a notable gap in the theoretical understanding of how the choice of update scheme in variational inference affects convergence guarantees, providing a principled basis for choosing between the two algorithms in practice.
📝 Abstract
We highlight a striking difference in behavior between two widely used variants of coordinate ascent variational inference: the sequential and parallel algorithms. While such differences were known in the numerical analysis literature in simpler settings, they remain largely unexplored in the optimization-focused literature on variational inference in more complex models. Focusing on the moderately high-dimensional linear regression problem, we show that the sequential algorithm, although typically slower, enjoys convergence guarantees under more relaxed conditions than the parallel variant, which is often employed to facilitate block-wise updates and improve computational efficiency.
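To make the sequential/parallel distinction concrete, the following is a minimal illustrative sketch (our own setup, not the paper's exact model or notation): mean-field CAVI for Bayesian linear regression y = Xβ + noise with independent Gaussian priors on the coefficients and known variances. The coordinate update for each variational mean μ_i uses the residual excluding coordinate i; the "sequential" variant reuses freshly updated means immediately (Gauss–Seidel style), while the "parallel" variant updates every coordinate from the previous iterate (Jacobi style), which is the numerical-analysis analogy the abstract alludes to.

```python
import numpy as np

def cavi(X, y, sigma2=1.0, tau2=1.0, n_iter=200, parallel=False):
    """Illustrative mean-field CAVI for y = X @ beta + N(0, sigma2) noise,
    prior beta_i ~ N(0, tau2), factorized q(beta_i) = N(mu_i, s2_i).
    (Hypothetical sketch; names and model choices are assumptions.)"""
    n, d = X.shape
    col_norms = (X ** 2).sum(axis=0)
    # Variational variances are fixed by the model; only the means iterate.
    s2 = 1.0 / (col_norms / sigma2 + 1.0 / tau2)
    mu = np.zeros(d)
    for _ in range(n_iter):
        if parallel:
            # Jacobi-style: every coordinate uses the previous iterate mu_old.
            mu_old = mu.copy()
            resid = y - X @ mu_old
            # X.T @ resid + col_norms * mu_old recovers x_i' (y - sum_{j != i} x_j mu_j).
            mu = s2 * (X.T @ resid + col_norms * mu_old) / sigma2
        else:
            # Gauss-Seidel-style: each coordinate sees the latest means.
            for i in range(d):
                resid_i = y - X @ mu + X[:, i] * mu[i]  # residual excluding coord i
                mu[i] = s2[i] * (X[:, i] @ resid_i) / sigma2
    return mu, s2
```

At a fixed point both variants satisfy (XᵀX + (σ²/τ²)I)μ = Xᵀy, i.e. the means match the exact Gaussian posterior mean; the two variants differ only in when (and whether) they reach it, which is exactly the gap the paper analyzes.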