🤖 AI Summary
This paper addresses the trade-off in modeling disjunctive constraints in mixed-integer programming (MIP): the big-M formulation yields weak relaxations, while the convex-hull formulation is computationally expensive. The authors propose the *P*-split method, a reformulation paradigm situated between these extremes, built on a lifted transformation that splits additively separable convex constraints into *P* partitions and forms the convex hull of the resulting linearized disjunction. Under certain assumptions, the *P*-split formulations form a hierarchy whose relaxation strength improves with the split parameter *P*, progressing from a big-M-equivalent relaxation toward the convex hull. The method is derived for disjuncts with convex constraints and generalized to nonconvex constraints within the disjuncts. Evaluated on 344 benchmark instances, including K-means clustering, semi-supervised clustering, P_ball problems, and optimization over trained ReLU neural networks, *P*-split formulations often explore a number of nodes comparable to the convex hull while reducing solution time by an order of magnitude, and outperform big-M in both time and node count.
📝 Abstract
We develop a class of mixed-integer formulations for disjunctive constraints that is intermediate in relaxation strength between the big-M and convex hull formulations. The main idea is to capture the best of both: a computationally light formulation with a tight relaxation. The "P-split" formulations are based on a lifted transformation that splits convex additively separable constraints into P partitions and forms the convex hull of the linearized and partitioned disjunction. The "P-split" formulations are derived for disjunctive constraints with convex constraints within each disjunct, and we generalize the results to the case with nonconvex constraints within the disjuncts. We analyze the continuous relaxation of the P-split formulations and show that, under certain assumptions, the formulations form a hierarchy starting from a big-M equivalent and converging to the convex hull. We computationally compare the P-split formulations against the big-M and convex hull formulations on 344 test instances. The test problems include K-means clustering, semi-supervised clustering, P_ball problems, and optimization over trained ReLU neural networks. The computational results show the promising potential of the P-split formulations: for many of the test problems, they are solved with a similar number of explored nodes as the convex hull formulation, while reducing the solution time by an order of magnitude and outperforming big-M in both time and number of explored nodes.
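To make the big-M vs. convex-hull trade-off concrete, here is a minimal toy sketch (not the paper's code, and not the P-split construction itself) comparing LP-relaxation bounds of the two classical formulations for a simple two-box disjunction: (x, y) ∈ [0,1]² or (x, y) ∈ [2,3]², minimizing x − y. The true optimum over the union (and over its convex hull) is −1. Because for a fixed relaxed indicator z both feasible sets are boxes, each relaxation bound can be computed by a grid scan over z, with no MIP solver needed. The example problem, bounds, and M value are all illustrative choices.

```python
# Toy comparison of LP-relaxation bounds: big-M vs. extended (Balas)
# convex-hull formulation of the disjunction
#   (x, y) in [0,1]^2  OR  (x, y) in [2,3]^2,  minimize x - y.
# True optimum over the union: -1 (e.g. at (0, 1) or (2, 3)).

def bigm_bound(M=3.0, steps=1000):
    """Bound from the big-M LP relaxation, scanning the relaxed binary
    z in [0, 1]. For fixed z the feasible set is a box, so the minimum
    of x - y is (lower bound on x) - (upper bound on y)."""
    best = float("inf")
    for i in range(steps + 1):
        z = i / steps
        x_lo = max(0.0, 2.0 - M * (1.0 - z))  # x >= 2 - M(1-z), x >= 0
        y_hi = min(3.0, 1.0 + M * z)          # y <= 1 + Mz, y <= 3
        best = min(best, x_lo - y_hi)
    return best

def hull_bound(steps=1000):
    """Same scan for the extended convex-hull formulation: x = x1 + x2
    with 0 <= x1 <= 1*(1-z) and 2z <= x2 <= 3z (likewise for y)."""
    best = float("inf")
    for i in range(steps + 1):
        z = i / steps
        x_lo = 0.0 * (1.0 - z) + 2.0 * z       # min of x1 + x2
        y_hi = 1.0 * (1.0 - z) + 3.0 * z       # max of y1 + y2
        best = min(best, x_lo - y_hi)
    return best

print(round(bigm_bound(), 9))  # -2.0: big-M relaxation is loose
print(round(hull_bound(), 9))  # -1.0: hull relaxation is tight
```

The big-M relaxation admits fractional z that enlarges the projected feasible region, giving a bound of −2 instead of the true −1; the extended hull formulation is tight but needs extra (copied) variables per disjunct. The P-split formulations of the paper populate the spectrum between these two endpoints.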