🤖 AI Summary
This work proposes a framework based on differentiable programming for learning adaptive optimization algorithms, addressing the slow convergence and high per-iteration cost of traditional first-order methods in large-scale optimization. By embedding Fenchel–Rockafellar duality theory into automatic differentiation systems, the framework enables end-to-end training and adaptive refinement of duality-driven iterative schemes such as the alternating direction method of multipliers (ADMM) and the primal-dual hybrid gradient method (PDHG). Implemented uniformly across major deep learning frameworks (PyTorch, TensorFlow, and JAX), the approach significantly improves both computational efficiency and solution quality on a range of tasks, including linear programming, optimal power flow (OPF), Laplacian regularization, and neural network verification.
📝 Abstract
Solving massive-scale optimization problems requires scalable first-order methods with low per-iteration cost. This tutorial highlights a shift in optimization: using differentiable programming not only to execute algorithms but to learn how to design them. Modern frameworks such as PyTorch, TensorFlow, and JAX enable this paradigm through efficient automatic differentiation. Embedding first-order methods within these systems allows end-to-end training that improves convergence and solution quality. Guided by Fenchel–Rockafellar duality, the tutorial demonstrates how duality-informed iterative schemes such as ADMM and PDHG can be learned and adapted. Case studies across linear programming (LP), OPF, Laplacian regularization, and neural network verification illustrate these gains.
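To make the idea of a duality-informed iterative scheme concrete, here is a minimal sketch of PDHG (the Chambolle–Pock primal-dual method) on a toy regularized least-squares problem. This is an illustrative assumption, not the tutorial's implementation: the problem, step-size choices, and NumPy port are hypothetical. The same unrolled loop, rewritten in PyTorch or JAX, becomes differentiable end-to-end, so the step sizes `tau` and `sigma` (or per-iteration variants of them) could be learned rather than hand-tuned, which is the shift the tutorial describes.

```python
import numpy as np

# Minimal PDHG sketch (assumed example) for
#     min_x  f(x) + g(Ax),   f(x) = 0.5*||x||^2,   g(z) = 0.5*||z - b||^2.
# In PyTorch/JAX the identical unrolled loop is differentiable, so tau and
# sigma could be trained end-to-end instead of being fixed by hand.

def pdhg(A, b, tau, sigma, iters=500):
    m, n = A.shape
    x = np.zeros(n)          # primal iterate
    x_bar = x.copy()         # extrapolated primal iterate
    y = np.zeros(m)          # dual iterate
    for _ in range(iters):
        # Dual step: prox of sigma*g*, where g*(y) = 0.5*||y||^2 + <y, b>,
        # so prox_{sigma g*}(v) = (v - sigma*b) / (1 + sigma).
        y = (y + sigma * (A @ x_bar) - sigma * b) / (1.0 + sigma)
        # Primal step: prox of tau*f, i.e. prox_{tau f}(v) = v / (1 + tau).
        x_new = (x - tau * (A.T @ y)) / (1.0 + tau)
        # Extrapolation, the hallmark of PDHG.
        x_bar = 2.0 * x_new - x
        x = x_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
L = np.linalg.norm(A, 2)        # operator norm; need tau*sigma*L^2 < 1
tau = sigma = 0.9 / L
x = pdhg(A, b, tau, sigma)
# Closed-form optimum of this toy problem, for comparison:
x_star = np.linalg.solve(np.eye(3) + A.T @ A, A.T @ b)
print(np.linalg.norm(x - x_star))
```

With both terms strongly convex here, the iterates contract linearly toward the closed-form solution; in the learned setting, the final objective value would serve as the training loss backpropagated through the unrolled iterations.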