A fast algorithm to minimize prediction loss of the optimal solution in inverse optimization problem of MILP

📅 2024-05-23
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In inverse optimization for mixed-integer linear programming (MILP), existing methods suffer from slow convergence in estimating objective function weights, with error bounds limited to (O(k^{-1/(d-1)})). Method: This paper proposes a projected subgradient method based on suboptimality loss—a novel formulation that integrates suboptimality-loss modeling, projected subgradient optimization, and asymptotic convergence analysis, seamlessly embedded within standard MILP solvers. Contribution/Results: To our knowledge, this is the first method achieving superpolynomial (exponentially bounded) convergence in MILP inverse optimization: the weight estimation error decays superpolynomially with iteration count (k), and exact recovery of the optimal weights is guaranteed within a finite number of iterations. Experiments demonstrate that the method requires fewer than one-seventh the number of MILP solver calls compared to state-of-the-art approaches, while ensuring finite-step convergence—substantially enhancing both computational efficiency and practical applicability.

Technology Category

Application Category

📝 Abstract
We consider the inverse optimization problem of estimating the weights of the objective function such that the given solution is an optimal solution for a mixed integer linear program (MILP). In this inverse optimization problem, the known methods exhibit inefficient convergence. Specifically, if $d$ denotes the dimension of the weights and $k$ the number of iterations, then the error of the weights is bounded by $O(k^{-1/(d-1)})$, leading to slow convergence as $d$ increases.We propose a projected subgradient method with a step size of $k^{-1/2}$ based on suboptimality loss. We theoretically show and demonstrate that the proposed method efficiently learns the weights. In particular, we show that there exists a constant $gamma>0$ such that the distance between the learned and true weights is bounded by $ Oleft(k^{-1/(1+gamma)} expleft(-frac{gamma k^{1/2}}{2+gamma} ight) ight), $ or the optimal solution is exactly recovered. Furthermore, experiments demonstrate that the proposed method solves the inverse optimization problems of MILP using fewer than $1/7$ the number of MILP calls required by known methods, and converges within a finite number of iterations.
Problem

Research questions and friction points this paper is trying to address.

Improving slow convergence in MILP inverse optimization problems
Estimating objective weights for optimal MILP solutions efficiently
Reducing MILP calls and iterations for weight learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Projected subgradient method with step size
Efficient weight learning via suboptimality loss
Reduced MILP calls for faster convergence
A
Akira Kitaoka
NEC Corporation