A Scalable Gradient-Based Optimization Framework for Sparse Minimum-Variance Portfolio Selection

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the high-dimensional sparse minimum-variance portfolio selection problem—selecting $k$ assets from $p$ candidates to minimize portfolio variance. The authors propose a gradient-based continuous relaxation framework, departing from computationally expensive mixed-integer programming (MIP) approaches. The key innovation is a novel Boolean relaxation coupled with a tunable concave–convex parametrization, enabling exact reformulation of the discrete combinatorial optimization problem as a continuous differentiable one. By designing a smoothly transitioned objective function and an effective constraint-handling strategy, the algorithm achieves scalable, high-accuracy solutions. Empirical evaluation shows that, on most test instances, the method recovers portfolios identical to those obtained by commercial MIP solvers; in divergent cases, the variance deviation is negligible. Crucially, the approach delivers substantial speedups—orders of magnitude faster—while preserving solution fidelity.

📝 Abstract
Portfolio optimization involves selecting asset weights to minimize a risk-reward objective, such as the portfolio variance in the classical minimum-variance framework. Sparse portfolio selection extends this by imposing a cardinality constraint: only $k$ assets from a universe of $p$ may be included. The standard approach models this problem as a mixed-integer quadratic program and relies on commercial solvers to find the optimal solution. However, the computational costs of such methods increase exponentially with $k$ and $p$, making them too slow for problems of even moderate size. We propose a fast and scalable gradient-based approach that transforms the combinatorial sparse selection problem into a constrained continuous optimization task via Boolean relaxation, while preserving equivalence with the original problem on the set of binary points. Our algorithm employs a tunable parameter that gradually transforms the auxiliary objective from a convex to a concave function. This allows a stable convex starting point, followed by a controlled path toward a sparse binary solution as the tuning parameter increases and the objective moves toward concavity. In practice, our method matches commercial solvers in asset selection for most instances; in rare instances, the solution differs by a few assets while showing a negligible error in portfolio variance.
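The abstract's idea—relaxing the binary selection variables to $[0,1]$ and annealing a parameter that pushes the relaxed solution toward a binary, $k$-sparse one—can be illustrated with a minimal sketch. The paper's exact parametrization and constraint handling are not reproduced here; this sketch uses an assumed simplification in which each asset's weight is proportional to its relaxed selection score $s_i$, and a concave penalty $\gamma \sum_i s_i(1-s_i)$ is annealed from $\gamma=0$ to force binarization. The function name and all hyperparameters are illustrative.

```python
import numpy as np

def sparse_min_variance(Sigma, k, n_steps=2000, lr=0.01, gamma_max=5.0, seed=0):
    """Illustrative sketch (not the paper's exact method): relax the
    k-of-p asset selection to scores s in [0,1]^p, weight each asset by
    its score, and anneal a concave penalty gamma * sum s_i (1 - s_i)
    that drives the scores toward binary values."""
    p = Sigma.shape[0]
    rng = np.random.default_rng(seed)
    s = np.full(p, k / p) + 0.01 * rng.standard_normal(p)  # start near the convex regime
    for t in range(n_steps):
        gamma = gamma_max * t / n_steps      # convex -> concave annealing schedule
        w = s / s.sum()                      # score-weighted portfolio, sums to 1
        # gradient of w^T Sigma w with respect to s, via w = s / sum(s):
        # d f / d s_i = (g_i - g . w) / sum(s), where g = 2 Sigma w
        g = 2.0 * (Sigma @ w)
        grad_s = (g - g @ w) / s.sum()
        grad_s += gamma * (1.0 - 2.0 * s)    # derivative of s(1 - s) penalty
        s = np.clip(s - lr * grad_s, 1e-9, 1.0)
        s *= k / s.sum()                     # approximately re-impose sum(s) = k
        s = np.clip(s, 0.0, 1.0)
    support = np.sort(np.argsort(s)[-k:])    # keep the k largest scores
    # refit exact min-variance weights on the selected support
    sub = Sigma[np.ix_(support, support)]
    w_sub = np.linalg.solve(sub, np.ones(k))
    w_sub /= w_sub.sum()
    w = np.zeros(p)
    w[support] = w_sub
    return support, w
```

On a toy diagonal covariance with two clearly low-variance assets, the annealed scores concentrate on those assets and the refit step recovers the exact minimum-variance weights on that support; for realistic problem sizes, the paper's method should be consulted for the actual parametrization and constraint-handling strategy.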
Problem

Research questions and friction points this paper is trying to address.

Sparse portfolio selection with cardinality constraints
Scalable optimization for large asset universes
Gradient-based Boolean relaxation for binary solutions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient-based optimization for sparse portfolios
Boolean relaxation transforms combinatorial to continuous
Tunable parameter controls convex to concave transition