🤖 AI Summary
This paper addresses smooth convex optimization over the spectrahedron—the set of unit-trace symmetric positive semidefinite matrices—where standard first-order methods require prohibitive high-rank matrix computations (e.g., full eigendecompositions or matrix inversions) in high dimensions, while classical Frank–Wolfe (FW) methods are cheap per iteration but suffer from slow worst-case convergence. The authors propose the first FW-type algorithm with provable linear convergence: under quadratic growth and strict complementarity, it achieves dimension-independent expected linear convergence after a finite number of iterations. The method relies only on efficient rank-one matrix updates—bypassing expensive high-rank operations—and reduces per-iteration complexity to $O(n^2)$. The authors provide rigorous theoretical analysis and demonstrate empirically that the algorithm significantly outperforms standard FW and projected gradient methods on large-scale covariance estimation and low-rank matrix recovery tasks.
📝 Abstract
We consider the problem of minimizing a smooth and convex function over the $n$-dimensional spectrahedron -- the set of real symmetric $n \times n$ positive semidefinite matrices with unit trace -- which underlies numerous applications in statistics, machine learning, and other domains. Standard first-order methods often require high-rank matrix computations, which are prohibitive when the dimension $n$ is large. The well-known Frank-Wolfe method, on the other hand, requires only efficient rank-one matrix computations, but suffers from slow worst-case convergence, even under conditions that enable linear convergence rates for standard methods. In this work we present the first Frank-Wolfe-based algorithm that applies only efficient rank-one matrix computations and, assuming quadratic growth and strict complementarity conditions, is guaranteed, after a finite number of iterations, to converge linearly, in expectation, and independently of the ambient dimension.
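To make the "rank-one matrix computations" concrete, here is a minimal sketch of the *classical* Frank-Wolfe method over the spectrahedron (not the paper's linearly convergent variant). The linear minimization oracle over the spectrahedron is a rank-one matrix $vv^\top$, where $v$ is a unit eigenvector for the smallest eigenvalue of the gradient, so each iteration needs only a leading-eigenvector computation rather than a full-rank operation. Function names and the test objective are illustrative assumptions, not from the paper.

```python
import numpy as np

def frank_wolfe_spectrahedron(grad_f, n, num_iters=500):
    """Classical Frank-Wolfe over {X : X symmetric PSD, tr(X) = 1}.

    Illustrative sketch only -- NOT the paper's algorithm. Each iteration
    touches the feasible set only through a rank-one vertex v v^T, where v
    is a unit eigenvector for the smallest eigenvalue of grad_f(X).
    """
    X = np.eye(n) / n  # feasible start: scaled identity has unit trace
    for t in range(num_iters):
        G = grad_f(X)
        # Rank-one LMO: argmin over the spectrahedron of <G, S> is v v^T
        # for v an eigenvector of the smallest eigenvalue of G.
        eigvals, eigvecs = np.linalg.eigh(G)
        v = eigvecs[:, 0]
        S = np.outer(v, v)           # rank-one vertex of the spectrahedron
        eta = 2.0 / (t + 2)          # standard FW step-size schedule
        X = (1 - eta) * X + eta * S  # convex combination stays feasible
    return X
```

For example, minimizing $f(X) = \tfrac12\|X - C\|_F^2$ for a fixed unit-trace PSD matrix $C$ (gradient $X - C$) drives the iterate toward $C$ at the usual $O(1/t)$ rate, while every iterate remains PSD with unit trace by construction. The paper's contribution is an FW-type method that keeps this rank-one per-iteration cost while converging linearly under the stated conditions.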