A Linearly Convergent Frank-Wolfe-type Method for Smooth Convex Minimization over the Spectrahedron

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses smooth convex optimization over the spectrahedron -- the set of unit-trace symmetric positive semidefinite matrices -- where classical Frank-Wolfe (FW) methods suffer from high computational cost (frequent high-rank SVDs or matrix inversions) and slow worst-case convergence in high dimensions. The authors propose the first FW-type algorithm with provable linear convergence: under quadratic growth and strict complementarity conditions, it achieves dimension-independent linear convergence in expectation after a finite number of iterations. The method employs rank-one updates, adaptive step sizes, and conditional gradient decomposition, bypassing expensive high-rank operations and reducing per-iteration complexity to $O(n^2)$. The paper provides rigorous theoretical analysis and demonstrates empirically that the algorithm significantly outperforms standard FW and projected gradient methods on large-scale covariance estimation and low-rank matrix recovery tasks.
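For context, the generic Frank-Wolfe step over the spectrahedron can be sketched as follows. This is a minimal NumPy illustration of the rank-one linear minimization oracle, not the paper's linearly convergent variant; the toy objective, step-size rule, and use of a full eigendecomposition (rather than a Lanczos-type computation of a single extreme eigenvector, which would be used at scale) are illustrative assumptions.

```python
import numpy as np

def frank_wolfe_spectrahedron(grad, X0, iters=200):
    """Generic Frank-Wolfe over {X : X >= 0 (PSD), tr(X) = 1}.

    The linear minimization oracle over the spectrahedron is rank-one:
    it returns v v^T, where v is an eigenvector of grad(X) associated
    with its smallest eigenvalue. This sketch uses a full symmetric
    eigendecomposition; large-scale code would use a Lanczos/power method.
    """
    X = X0
    for t in range(iters):
        G = grad(X)
        # Rank-one LMO: eigenvector for the minimum eigenvalue of the gradient.
        _, V = np.linalg.eigh(G)
        v = V[:, 0]
        S = np.outer(v, v)           # a vertex (extreme point) of the spectrahedron
        eta = 2.0 / (t + 2)          # standard open-loop FW step size
        X = (1 - eta) * X + eta * S  # convex combination preserves trace and PSD-ness
    return X

# Toy instance: project a symmetric matrix C onto the spectrahedron,
# i.e. minimize f(X) = 0.5 * ||X - C||_F^2, whose gradient is X - C.
n = 5
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
C = (A + A.T) / 2
X = frank_wolfe_spectrahedron(lambda X: X - C, np.eye(n) / n)
```

Every iterate stays feasible by construction: it is a convex combination of unit-trace PSD matrices, so only the rank-one eigenvector computation touches the constraint set.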

📝 Abstract
We consider the problem of minimizing a smooth and convex function over the $n$-dimensional spectrahedron -- the set of real symmetric $n \times n$ positive semidefinite matrices with unit trace -- which underlies numerous applications in statistics, machine learning, and additional domains. Standard first-order methods often require high-rank matrix computations, which are prohibitive when the dimension $n$ is large. The well-known Frank-Wolfe method, on the other hand, only requires efficient rank-one matrix computations, but suffers from slow worst-case convergence, even under conditions that enable linear convergence rates for standard methods. In this work we present the first Frank-Wolfe-based algorithm that applies only efficient rank-one matrix computations and, assuming quadratic growth and strict complementarity conditions, is guaranteed, after a finite number of iterations, to converge linearly, in expectation, and independently of the ambient dimension.
Problem

Research questions and friction points this paper is trying to address.

Minimizing smooth convex functions over the spectrahedron efficiently.
Addressing high-rank matrix computation issues in large dimensions.
Achieving linear convergence with rank-one matrix computations.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frank-Wolfe-based algorithm with rank-one computations
Linear convergence under quadratic growth conditions
Dimension-independent convergence in expectation