Deep Distributed Optimization for Large-Scale Quadratic Programming

📅 2024-12-11
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large-scale quadratic programming (QP) problems—featuring $10^4$–$10^5$ variables and constraints—suffer from severe computational bottlenecks under conventional solvers. Method: We propose DistributedQP, a distributed optimization framework, and DeepDistributedQP, a learnable deep-unrolled model. Our approach uniquely integrates the OSQP operator-splitting scheme with distributed consensus, enabling the first end-to-end deep unrolling of a distributed QP solver. Leveraging PAC-Bayes theory, we derive provably tight generalization bounds and optimality-gap guarantees. Results: Extensive experiments demonstrate that DeepDistributedQP significantly outperforms OSQP, ADMM, and other baselines on random QPs, optimal control, and traffic network optimization. Trained solely on small-scale instances, it generalizes robustly to problems with up to 50K variables and 150K constraints. Wall-clock inference time is accelerated by 1–2 orders of magnitude, while maintaining theoretical convergence guarantees and strong generalization performance.
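The summary above refers to the OSQP operator-splitting scheme. As context, here is a minimal sketch of the ADMM-style splitting that OSQP builds on, applied to a box-constrained QP. This is an illustration only, not the paper's DistributedQP method: it uses a fixed penalty, no relaxation step, and a dense linear solve, and all names are ours.

```python
import numpy as np

def osqp_style_admm(P, q, A, l, u, rho=1.0, sigma=1e-6, iters=200):
    """Illustrative operator-splitting (ADMM) iteration for the QP
        minimize 0.5 x'Px + q'x  subject to  l <= Ax <= u,
    in the spirit of OSQP's splitting (simplified: fixed penalty,
    no relaxation, dense linear algebra)."""
    n, m = P.shape[0], A.shape[0]
    x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
    K = P + sigma * np.eye(n) + rho * A.T @ A  # condensed KKT matrix
    for _ in range(iters):
        # x-update: minimize the augmented Lagrangian over x
        x = np.linalg.solve(K, sigma * x - q + A.T @ (rho * z - y))
        # z-update: project Ax + y/rho onto the box [l, u]
        z = np.clip(A @ x + y / rho, l, u)
        # dual ascent on the consensus constraint Ax = z
        y = y + rho * (A @ x - z)
    return x

# Tiny example: minimize 0.5*x^2 - x subject to 0 <= x <= 0.4
x = osqp_style_admm(np.array([[1.0]]), np.array([-1.0]),
                    np.array([[1.0]]), np.array([0.0]), np.array([0.4]))
# x converges to the box-clipped optimum 0.4
```

In the distributed setting described by the paper, each agent would run a local iteration of this kind while exchanging consensus variables with its neighbors.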

📝 Abstract
Quadratic programming (QP) forms a crucial foundation in optimization, encompassing a broad spectrum of domains and serving as the basis for more advanced algorithms. Consequently, as the scale and complexity of modern applications continue to grow, the development of efficient and reliable QP algorithms is becoming increasingly vital. In this context, this paper introduces a novel deep learning-aided distributed optimization architecture designed for tackling large-scale QP problems. First, we combine the state-of-the-art Operator Splitting QP (OSQP) method with a consensus approach to derive DistributedQP, a new method tailored for network-structured problems, with convergence guarantees to optimality. Subsequently, we unfold this optimizer into a deep learning framework, leading to DeepDistributedQP, which leverages learned policies to accelerate reaching the desired accuracy within a restricted number of iterations. Our approach is also theoretically grounded through Probably Approximately Correct (PAC)-Bayes theory, providing generalization bounds on the expected optimality gap for unseen problems. Both the proposed framework and its centralized version DeepQP significantly outperform their standard optimization counterparts on a variety of tasks such as randomly generated problems, optimal control, linear regression, transportation networks and others. Notably, DeepDistributedQP demonstrates strong generalization by training on small problems and scaling to solve much larger ones (up to 50K variables and 150K constraints) using the same policy. Moreover, it achieves orders-of-magnitude improvements in wall-clock time compared to OSQP. The certifiable performance guarantees of our approach are also demonstrated, ensuring higher-quality solutions over traditional optimizers.
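The abstract's key idea of "unfolding the optimizer into a deep learning framework" can be sketched as a fixed number of solver iterations, one per unrolled "layer", whose per-iteration parameters (here, the penalty rho_k) are exposed as learnable quantities. The sketch below is illustrative only, under our own simplifications: a centralized box-constrained QP, plain ADMM updates, and penalties supplied as numbers rather than trained end-to-end with autodiff as an actual unrolled solver would do.

```python
import numpy as np

def unrolled_qp_solver(P, q, A, l, u, rhos, sigma=1e-6):
    """Sketch of deep unrolling: len(rhos) ADMM iterations, where each
    iteration's penalty rho_k plays the role of a learnable parameter.
    Structure and names are illustrative, not the paper's implementation."""
    n, m = P.shape[0], A.shape[0]
    x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
    for rho in rhos:  # one unrolled "layer" per penalty parameter
        K = P + sigma * np.eye(n) + rho * A.T @ A
        x = np.linalg.solve(K, sigma * x - q + A.T @ (rho * z - y))
        z = np.clip(A @ x + y / rho, l, u)
        y = y + rho * (A @ x - z)
    return x

# Same toy QP as before, with a schedule of per-layer penalties
x_hat = unrolled_qp_solver(np.array([[1.0]]), np.array([-1.0]),
                           np.array([[1.0]]), np.array([0.0]),
                           np.array([0.4]), rhos=np.linspace(0.5, 2.0, 60))
```

Training would then minimize the optimality gap of `x_hat` over a distribution of problems, which is where the paper's PAC-Bayes analysis supplies generalization bounds for unseen instances.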
Problem

Research questions and friction points this paper is trying to address.

Large-scale Optimization
Quadratic Programming
Efficiency Improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

DeepDistributedQP
PAC-Bayes Theory
Large-Scale Quadratic Programming
Augustinos D. Saravanos
Postdoctoral Researcher, Massachusetts Institute of Technology
Optimization, Machine Learning, Control Theory, Multi-Agent Systems, Large-Scale Decision-Making
Hunter Kuperman
Autonomous Control and Decision Systems Laboratory, Georgia Institute of Technology
Alex Oshin
Autonomous Control and Decision Systems Laboratory, Georgia Institute of Technology
Arshiya Taj Abdul
Autonomous Control and Decision Systems Laboratory, Georgia Institute of Technology
Vincent Pacelli
Postdoc, School of Aerospace Engineering, Georgia Tech
Robotics, Stochastic Optimal Control, Information Theory
Evangelos A. Theodorou
Autonomous Control and Decision Systems Laboratory, Georgia Institute of Technology