🤖 AI Summary
This work addresses the efficient solution of parametric bilevel optimization problems with coupling constraints. Conventional optimizers struggle with such strongly coupled, nonconvex structures. We propose the first end-to-end differentiable framework: it embeds the lower-level optimal solution into the upper-level gradient computation via the implicit function theorem and a differentiable interior-point method, enabling neural networks to directly learn the mapping from problem parameters to optimal solutions. Our approach eliminates the restrictive lower-level assumptions (such as strong convexity) and approximate gradients prevalent in existing bilevel learning methods, and is the first to support general bilevel structures with coupled inequality constraints. Evaluated on synthetic benchmarks and a control-system co-design task, our method achieves 10–100× speedups over standard solvers while producing solutions nearly identical to exact optima.
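The core mechanism above, differentiating the upper-level objective through the lower-level optimum via the implicit function theorem, can be illustrated on a toy problem. This is a minimal sketch, not the paper's implementation: the lower-level problem, its closed-form solution, and the upper-level objective are all invented for illustration, and the interior-point machinery for inequality constraints is omitted.

```python
import numpy as np

# Toy lower level: y*(x) = argmin_y f(x, y) with f(x, y) = 0.5*y^2 - x*y,
# which has the closed-form solution y*(x) = x.
def lower_level_solution(x):
    return x

# Implicit function theorem at an (unconstrained) lower-level optimum:
# grad_y f(x, y*) = 0 implies dy*/dx = -[H_yy]^(-1) H_yx,
# where H_yy = d^2f/dy^2 and H_yx = d^2f/dydx at (x, y*).
def implicit_gradient(x):
    H_yy = np.array([[1.0]])   # d^2f/dy^2 for the toy f
    H_yx = np.array([[-1.0]])  # d^2f/dydx for the toy f
    return -np.linalg.solve(H_yy, H_yx)

# Hypothetical upper-level objective F(x) = (y*(x) - 3)^2; its gradient
# chains the outer derivative through the implicit inner derivative.
def upper_gradient(x):
    y = lower_level_solution(x)
    dF_dy = 2.0 * (y - 3.0)
    dy_dx = implicit_gradient(x)[0, 0]
    return dF_dy * dy_dx
```

Because the gradient flows through the solution map rather than an unrolled solver, the same pattern extends (with KKT-based Jacobians) to constrained lower levels.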
📝 Abstract
Learning to Optimize (L2O) is a subfield of machine learning (ML) in which ML models are trained to solve parametric optimization problems. The general goal is to learn a fast approximator of solutions to constrained optimization problems, as a function of their defining parameters. Prior L2O methods focus almost entirely on single-level programs, in contrast to bilevel programs, whose constraints are themselves expressed in terms of optimization subproblems. Bilevel programs have numerous important use cases but are notoriously difficult to solve, particularly under stringent time demands. This paper proposes a framework for learning to solve a broad class of challenging bilevel optimization problems, by leveraging modern techniques for differentiation through optimization problems. The framework is illustrated on an array of synthetic bilevel programs, as well as challenging control system co-design problems, showing how neural networks can be trained as efficient approximators of solutions to parametric bilevel programs.
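The L2O premise, learning a fast map from problem parameters to optimal solutions, can be sketched on a toy parametric problem. Everything here is an assumption for illustration: the parametric problem, its closed-form solutions, and the use of a linear least-squares fit in place of the neural networks the abstract describes.

```python
import numpy as np

# Toy parametric problem: y*(p) = argmin_y 0.5*y^2 - p*y, with closed-form
# solution y*(p) = p. We generate (parameter, solution) training pairs.
rng = np.random.default_rng(0)
p_train = rng.uniform(-1.0, 1.0, size=(100, 1))
y_train = p_train.copy()  # ground-truth solutions from the closed form

# A linear least-squares model stands in for a trained neural network.
w, *_ = np.linalg.lstsq(p_train, y_train, rcond=None)

def approximate_solution(p):
    # One cheap matrix multiply replaces a full optimization solve.
    return p @ w
```

The speedup claim in the abstract rests on exactly this substitution: once trained, evaluating the approximator costs a forward pass instead of an iterative solve.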