Monotone Optimisation with Learned Projections

📅 2026-01-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of solving monotone optimisation problems when the objective or constraint functions are available only as black-box oracles, which renders traditional solvers that rely on explicit functional forms, such as Polyblock Outer Approximation, inapplicable. To overcome this limitation, the authors propose Homogeneous-Monotone Radial Inverse (HM-RI) networks, which directly learn the radial inverse function and thereby replace computationally expensive bisection searches. The architecture is structurally designed to enforce monotonicity and homogeneity by construction. Building on a theoretical characterisation of the radial inverse, the authors further introduce a relaxed monotonicity condition that remains compatible with the Polyblock algorithm, substantially reducing training overhead. Experimental results show that the proposed method significantly accelerates convergence across multiple monotone optimisation benchmarks while maintaining solution quality, outperforming baselines that disregard monotone structure.
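The bisection search that HM-RI networks aim to replace can be sketched as follows. In standard POA, projecting a polyblock vertex v onto the boundary of the feasible set G means finding the largest scalar λ with λ·v still in G; because G is downward-closed under the monotone constraints, membership is monotone in λ and bisection applies. This is a minimal illustration assuming a black-box membership oracle `in_G` (the function name and interface are illustrative, not from the paper):

```python
def radial_projection(v, in_G, tol=1e-8):
    """Find lambda* = max{lam in [0, 1] : lam * v in G} by bisection.

    Because G is downward-closed (normal), membership of lam * v in G
    is monotone in lam, so bisection converges to the boundary point.
    Assumes the origin lies in G and v itself lies outside G.
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if in_G([mid * x for x in v]):
            lo = mid   # mid * v still feasible: boundary is further out
        else:
            hi = mid   # mid * v infeasible: boundary is closer in
    return lo
```

Each call to this routine costs many oracle evaluations per vertex; the paper's HM-RI network amortises this by predicting the projection (via the radial inverse) in a single forward pass.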

📝 Abstract
Monotone optimisation problems admit specialised global solvers such as the Polyblock Outer Approximation (POA) algorithm, but these methods typically require explicit objective and constraint functions. In many applications, these functions are only available through data, making POA difficult to apply directly. We introduce an algorithm-aware learning approach that integrates learned models into POA by directly predicting its projection primitive via the radial inverse, avoiding the costly bisection procedure used in standard POA. We propose Homogeneous-Monotone Radial Inverse (HM-RI) networks, structured neural architectures that enforce key monotonicity and homogeneity properties, enabling fast projection estimation. We provide a theoretical characterisation of radial inverse functions and show that, under mild structural conditions, an HM-RI predictor corresponds to the radial inverse of a valid set of monotone constraints. To reduce training overhead, we further develop relaxed monotonicity conditions that remain compatible with POA. Across multiple monotone optimisation benchmarks (indefinite quadratic programming, multiplicative programming, and transmit power optimisation), our approach yields substantial speed-ups in comparison to direct function estimation while maintaining strong solution quality, outperforming baselines that do not exploit monotonic structure.
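The paper's exact HM-RI parameterisation is not reproduced here, but the two structural properties the abstract names, monotonicity and positive homogeneity, can be enforced architecturally by a standard construction: nonnegative weight matrices, no bias terms, and ReLU activations. Nonnegative weights preserve coordinate-wise ordering, ReLU is nondecreasing, and every layer commutes with positive scaling, so the resulting map is monotone and positively homogeneous of degree 1. A hypothetical sketch of such a network (not the authors' implementation):

```python
import numpy as np

def hm_forward(x, weights):
    """Monotone, positively homogeneous (degree-1) feedforward map.

    Illustrative construction: nonnegative weight matrices, no biases,
    ReLU hidden activations. Nonnegative weights keep the map
    coordinate-wise monotone; absence of biases plus ReLU (itself
    positively homogeneous) makes f(c * x) = c * f(x) for all c > 0.
    Not necessarily the paper's exact HM-RI architecture.
    """
    h = np.asarray(x, dtype=float)
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)  # ReLU, no bias: homogeneity preserved
    return weights[-1] @ h          # linear output layer, also bias-free

# Nonnegative weights can be obtained e.g. by squaring free parameters
# during training; here we just sample them nonnegative directly.
rng = np.random.default_rng(0)
weights = [rng.random((8, 3)), rng.random((1, 8))]
```

Both properties can be checked numerically: scaling the input by c > 0 scales the output by c exactly, and increasing any input coordinate never decreases the output.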
Problem

Research questions and friction points this paper is trying to address.

Monotone Optimisation
Polyblock Outer Approximation
Radial Inverse
Projection Primitive
Data-driven Constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Monotone Optimisation
Radial Inverse
Algorithm-Aware Learning
Homogeneous-Monotone Networks
Polyblock Outer Approximation