Provably data-driven projection method for quadratic programming

📅 2025-09-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Scalability of high-dimensional convex quadratic programming (QP) remains a fundamental challenge. Unlike in linear programming, the optimal solutions of QPs do not lie at vertices of the feasible region, rendering standard value-function modeling infeasible. Method: We propose the first data-driven projection method for QP with provable generalization guarantees. By learning a low-dimensional projection matrix from a task distribution, we reduce the problem dimensionality and solve the projected QP efficiently. To model the optimal value function despite its non-vertex nature, we leverage Carathéodory's theorem to localize the solution region and design an *unrolled active-set method* that admits bounded computational complexity. We further integrate the Goldberg–Jerrum (GJ) framework with statistical learning theory to derive a generalization error bound for the learned projection matrix. Results: Experiments across diverse settings demonstrate significant improvements in both the solving efficiency and scalability of QP, validating the theoretical guarantees and practical efficacy of our approach.

📝 Abstract
Projection methods reduce the dimensionality of an optimization instance, thereby improving the scalability of high-dimensional problems. Recently, Sakaue and Oki proposed a data-driven approach for linear programs (LPs), in which the projection matrix is learned from problem instances drawn from an application-specific distribution. We analyze the generalization guarantee of data-driven projection matrix learning for convex quadratic programs (QPs). Unlike in LPs, the optimal solutions of convex QPs are not confined to the vertices of the feasible polyhedron, which complicates the analysis of the optimal value function. To overcome this challenge, we show, using Carathéodory's theorem, that the solutions of convex QPs can be localized within a feasible region corresponding to a special active set. Building on this observation, we propose the unrolled active-set method, which models the computation of the optimal value as a Goldberg–Jerrum (GJ) algorithm with bounded complexity, thereby establishing learning guarantees. We then further extend our analysis to other settings, including learning to match the optimal solution and the input-aware setting, where we learn a mapping from QP problem instances to projection matrices.
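The core projection idea described above can be sketched in a few lines of NumPy/SciPy: given a convex QP over x ∈ R^n, a projection matrix P ∈ R^{n×k} with k ≪ n restricts the search to x = Py, yielding a much smaller QP over y. This is a minimal illustrative sketch, not the paper's method: here P is a random orthonormal matrix standing in for the *learned* projection, the problem sizes are arbitrary, and a generic SLSQP solver replaces the paper's unrolled active-set machinery.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, k, m = 50, 5, 20  # original dim, projected dim, number of constraints (illustrative)

# A random convex QP instance: min 0.5 x^T Q x + c^T x  s.t.  A x <= b
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)   # positive definite Hessian
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.random(m) + 1.0       # b > 0, so x = 0 is strictly feasible

# Projection matrix P (random here; in the paper it is learned from sampled instances)
P = np.linalg.qr(rng.standard_normal((n, k)))[0]

# Projected QP over y in R^k: min 0.5 y^T (P^T Q P) y + (P^T c)^T y  s.t.  (A P) y <= b
Qp, cp, Ap = P.T @ Q @ P, P.T @ c, A @ P
res = minimize(
    lambda y: 0.5 * y @ Qp @ y + cp @ y,
    np.zeros(k),
    jac=lambda y: Qp @ y + cp,
    constraints=[{"type": "ineq",
                  "fun": lambda y: b - Ap @ y,   # SLSQP expects fun(y) >= 0
                  "jac": lambda y: -Ap}],
    method="SLSQP",
)
x_hat = P @ res.x  # recovered n-dimensional solution: feasible, generally suboptimal
```

Any y feasible for the projected problem gives a feasible x = Py for the original QP, so the projected optimum upper-bounds the true one; the quality of that bound is exactly what learning P from the task distribution is meant to improve.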
Problem

Research questions and friction points this paper is trying to address.

Develops data-driven projection method for quadratic programming scalability
Analyzes generalization guarantees for convex QPs using active sets
Extends analysis to solution matching and input-aware projection learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data-driven projection matrix learning for QPs
Unrolled active set method with bounded complexities
Mapping from QP instances to projection matrices