🤖 AI Summary
This work addresses the computational bottleneck in high-dimensional semidefinite programming (SDP) caused by full-rank singular value decomposition (SVD) during projection onto the positive semidefinite cone. We focus on convex optimization problems—featuring smooth or nonsmooth objectives and linear or nonlinear smooth convex constraints—that admit low-rank solutions and satisfy low-rank complementarity conditions. We propose the Low-Rank Extragradient Method (LREM), which replaces full-rank SVD with low-rank SVD for semidefinite cone projections, provided an approximately optimal initial point is available. We establish, for the first time, a rigorous proof that LREM preserves the standard convergence rate of its full-rank counterpart. Our theoretical analysis identifies complementarity as the key condition enabling feasible low-rank projection. Empirical evaluation on benchmark problems—including Max-Cut—demonstrates substantial reductions in memory usage and computational cost, while maintaining solution accuracy and convergence guarantees, thereby enhancing the scalability of SDP solvers.
📝 Abstract
We consider several classes of highly important semidefinite optimization problems that involve both a convex objective function (smooth or nonsmooth) and additional linear or nonlinear smooth convex constraints, which are ubiquitous in statistics, machine learning, combinatorial optimization, and other domains. We focus on high-dimensional and plausible settings in which the problem admits a low-rank solution that also satisfies a low-rank complementarity condition. We provide several theoretical results proving that, under these circumstances, the well-known Extragradient method, when initialized in the proximity of an optimal primal-dual solution, converges to a solution of the constrained optimization problem with its standard convergence rate guarantees, using only low-rank singular value decompositions (SVDs) to project onto the positive semidefinite cone, as opposed to the computationally prohibitive full-rank SVDs required in the worst case. Our approach is supported by numerical experiments conducted on a dataset of Max-Cut instances.
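The computational primitive at the heart of this approach, projecting a symmetric matrix onto the positive semidefinite cone using only a truncated (rank-`r`) eigendecomposition rather than a full SVD, can be sketched as follows. This is a minimal illustration under our own naming conventions (`lowrank_psd_projection` and the rank parameter `r` are not from the paper); when the matrix's positive spectrum has rank at most `r`, the truncated projection coincides with the exact Euclidean projection:

```python
import numpy as np
from scipy.sparse.linalg import eigsh


def lowrank_psd_projection(X, r):
    """Rank-r approximation of the Euclidean projection of a symmetric
    matrix X onto the PSD cone.

    Computes only the top-r eigenpairs (Lanczos iteration via eigsh),
    avoiding the full-rank eigendecomposition/SVD. Exact whenever the
    positive part of X's spectrum has rank at most r.
    """
    # Top-r eigenpairs by largest algebraic eigenvalue.
    w, V = eigsh(X, k=r, which="LA")
    # Projection onto the PSD cone keeps only the nonnegative spectrum.
    w = np.clip(w, 0.0, None)
    # Reconstruct V @ diag(w) @ V.T without forming diag(w).
    return (V * w) @ V.T
```

In a low-rank extragradient step, such a truncated projection would replace the full-spectrum projection in both the extrapolation and update steps, reducing the per-iteration cost from a full eigendecomposition to a few Lanczos iterations.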