Learning to Optimize by Differentiable Programming

📅 2026-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a differentiable programming–based framework for learning adaptive optimization algorithms, addressing the slow convergence of traditional first-order methods in large-scale optimization while preserving their low per-iteration cost. By embedding Fenchel–Rockafellar duality theory into automatic differentiation systems, the framework enables end-to-end training and adaptive refinement of duality-driven iterative schemes such as ADMM and PDHG. Implemented uniformly across major deep learning frameworks—including PyTorch, TensorFlow, and JAX—the approach significantly improves both computational efficiency and solution quality on a range of tasks, including linear programming, optimal power flow (OPF), Laplacian regularization, and neural network verification.

📝 Abstract
Solving massive-scale optimization problems requires scalable first-order methods with low per-iteration cost. This tutorial highlights a shift in optimization: using differentiable programming not only to execute algorithms but to learn how to design them. Modern frameworks such as PyTorch, TensorFlow, and JAX enable this paradigm through efficient automatic differentiation. Embedding first-order methods within these systems allows end-to-end training that improves convergence and solution quality. Guided by Fenchel-Rockafellar duality, the tutorial demonstrates how duality-informed iterative schemes such as ADMM and PDHG can be learned and adapted. Case studies across LP, OPF, Laplacian regularization, and neural network verification illustrate these gains.
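As a concrete illustration of the duality-informed schemes the abstract mentions, here is a minimal NumPy sketch of PDHG (Chambolle–Pock) applied to a basis-pursuit problem, min ‖x‖₁ s.t. Ax = b. The problem instance, step sizes, and iteration count are illustrative choices, not taken from the paper; in PyTorch, TensorFlow, or JAX the same loop could be unrolled and differentiated end-to-end to learn the step sizes.

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pdhg_basis_pursuit(A, b, tau, sigma, iters=2000):
    """PDHG for min ||x||_1 s.t. Ax = b.

    Primal prox: soft-thresholding. The dual prox, for the convex
    conjugate of the indicator of {b}, is y -> y - sigma * b.
    Convergence requires tau * sigma * ||A||^2 < 1.
    """
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        x_new = soft_threshold(x - tau * (A.T @ y), tau)
        # Dual ascent step evaluated at the extrapolated primal point
        y = y + sigma * (A @ (2.0 * x_new - x) - b)
        x = x_new
    return x

# Tiny instance: the minimum-l1 solution of Ax = b is x* = (0, 0, 1).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
x = pdhg_basis_pursuit(A, b, tau=0.5, sigma=0.5)  # 0.25 * ||A||^2 = 0.75 < 1
```

The fixed step sizes here are exactly what a learned variant would replace: making `tau` and `sigma` (or per-iteration schedules of them) trainable parameters and backpropagating through the unrolled loop recovers the "learning to optimize" setup the tutorial describes.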
Problem

Research questions and friction points this paper is trying to address.

optimization
first-order methods
differentiable programming
scalability
large-scale problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

differentiable programming
learned optimization
automatic differentiation
duality-informed algorithms
end-to-end training
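The "learned optimization" and "end-to-end training" ideas listed above can be sketched without any deep-learning framework: unroll a first-order method for a fixed number of steps, carry the sensitivity of the iterates with respect to a step size (hand-coded forward-mode differentiation), and descend on that meta-gradient. The quadratic instance, learning rates, and iteration counts below are illustrative assumptions, not the paper's setup; in PyTorch, TensorFlow, or JAX the sensitivity `s` would come from automatic differentiation.

```python
import numpy as np

def unrolled_gd(Q, b, alpha, K):
    """Run K steps of gradient descent on f(x) = 0.5 x'Qx - b'x,
    carrying s = dx/d(alpha) via forward-mode differentiation."""
    x = np.zeros_like(b)
    s = np.zeros_like(b)  # sensitivity of the iterate w.r.t. alpha
    for _ in range(K):
        g = Q @ x - b                  # gradient at the current iterate
        s = s - alpha * (Q @ s) - g    # d/d(alpha) of the update below
        x = x - alpha * g
    return x, s

Q = np.diag([3.0, 1.0])
b = np.array([1.0, 1.0])

def meta_loss(alpha, K=10):
    """Objective value reached after K unrolled steps."""
    x, _ = unrolled_gd(Q, b, alpha, K)
    return 0.5 * x @ Q @ x - b @ x

alpha = 0.1
loss_before = meta_loss(alpha)
for _ in range(50):                    # meta-descent on the step size
    x, s = unrolled_gd(Q, b, alpha, K=10)
    grad_alpha = (Q @ x - b) @ s       # chain rule: dL/d(alpha)
    alpha -= 0.02 * grad_alpha
loss_after = meta_loss(alpha)
```

The meta-gradient pushes `alpha` up from its conservative initial value toward the well-tuned regime, so the objective reached after ten unrolled steps improves; replacing the scalar `alpha` with a small network that maps iterate features to step sizes gives the adaptive, learned schemes the tutorial studies.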
Liping Tao
Nanyang Technological University, Singapore
Xindi Tong
Nanyang Technological University, Singapore
Chee Wei Tan
Nanyang Technological University, Singapore
Networks · Distributed Optimization · Gen AI