A Penalty Approach for Differentiation Through Black-Box Quadratic Programming Solvers

📅 2026-02-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of efficiently and robustly differentiating through quadratic programming (QP) solutions in differentiable optimization, particularly in large-scale settings where conventional KKT-based approaches suffer from high computational cost and numerical instability. The authors propose dXPP, a solver-agnostic differentiable QP framework that decouples solving and differentiation via a penalty method. In the forward pass, dXPP leverages any black-box QP solver, while in the backward pass, it computes gradients by implicitly differentiating a smooth approximation of the penalized problem, requiring only the solution of a smaller linear system in primal variables. As the first solver-independent differentiable QP framework, dXPP avoids explicit construction of the KKT system, achieving significant gains in both computational efficiency and numerical stability for large-scale problems without sacrificing accuracy. Experiments on random QPs, sparse projections, and multi-period portfolio optimization demonstrate substantial speedups over KKT-based methods while maintaining competitive solution accuracy.

📝 Abstract
Differentiating through the solution of a quadratic program (QP) is a central problem in differentiable optimization. Most existing approaches differentiate through the Karush--Kuhn--Tucker (KKT) system, but their computational cost and numerical robustness can degrade at scale. To address these limitations, we propose dXPP, a penalty-based differentiation framework that decouples QP solving from differentiation. In the solving step (forward pass), dXPP is solver-agnostic and can leverage any black-box QP solver. In the differentiation step (backward pass), we map the solution to a smooth approximate penalty problem and implicitly differentiate through it, requiring only the solution of a much smaller linear system in the primal variables. This approach bypasses the difficulties inherent in explicit KKT differentiation and significantly improves computational efficiency and robustness. We evaluate dXPP on various tasks, including randomly generated QPs, large-scale sparse projection problems, and a real-world multi-period portfolio optimization task. Empirical results demonstrate that dXPP is competitive with KKT-based differentiation methods and achieves substantial speedups on large-scale problems.
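The backward pass described above can be sketched concretely. The following is a minimal, illustrative NumPy sketch (not the authors' implementation): it treats the solver's output `x_star` for the QP min ½xᵀQx + qᵀx s.t. Ax ≤ b as a stationary point of a smoothed penalty objective and applies the implicit function theorem, so the backward pass needs only one linear solve of primal dimension rather than a full KKT system. The softplus smoothing, the function name, and the parameters `rho` and `beta` are all assumptions made for illustration.

```python
import numpy as np

def backward_penalty_qp(Q, q, A, b, x_star, grad_x, rho=1e3, beta=50.0):
    """Hypothetical penalty-based backward pass for the QP
        min 1/2 x^T Q x + q^T x  s.t.  A x <= b.
    x_star comes from any black-box QP solver; grad_x is the upstream
    gradient of the loss w.r.t. x_star. Smoothing choice is an assumption."""
    s = A @ x_star - b                      # constraint residuals
    sig = 1.0 / (1.0 + np.exp(-beta * s))   # phi'(s) for a softplus penalty
    d2 = beta * sig * (1.0 - sig)           # phi''(s)
    # Hessian of the penalized objective: n x n, no explicit KKT matrix
    H = Q + rho * A.T @ (d2[:, None] * A)
    u = np.linalg.solve(H, grad_x)          # single primal-sized solve
    # Gradients w.r.t. problem data via the implicit function theorem
    grad_q = -u
    grad_b = rho * d2 * (A @ u)
    grad_Q = -0.5 * (np.outer(u, x_star) + np.outer(x_star, u))
    grad_A = -rho * (np.outer(sig, u) + np.outer(d2 * (A @ u), x_star))
    return grad_Q, grad_q, grad_A, grad_b
```

When all constraints are strictly inactive, `sig` and `d2` vanish, `H` reduces to `Q`, and the gradients coincide with those of the unconstrained problem, which is a quick sanity check for a sketch like this.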
Problem

Research questions and friction points this paper is trying to address.

differentiable optimization
quadratic programming
black-box solver
KKT system
implicit differentiation
Innovation

Methods, ideas, or system contributions that make the work stand out.

differentiable optimization
quadratic programming
penalty method
implicit differentiation
black-box solver
Yuxuan Linghu
Shanghai Jiao Tong University
Zhiyuan Liu
The University of Chicago
Qi Deng
Antai College of Economics & Management, Shanghai Jiao Tong University
optimization · machine learning