Globalized Adversarial Regret Optimization: Robust Decisions with Uncalibrated Predictions

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses optimization problems reliant on machine learning predictions when reliable error bounds are unavailable, a setting where traditional robust and regret-based approaches struggle to deliver effective performance guarantees. The paper introduces the Global Adversarial Regret Optimization (GARO) framework, which generalizes the notion of adversarial regret globally and provides unified absolute or relative performance guarantees for uncertainties of arbitrary magnitude—without requiring probabilistic calibration of uncertainty sets. By extending Lepski’s adaptive method to downstream decision-making and leveraging affine worst-case cost functions with polyhedral norm-based uncertainty sets, GARO is exactly reformulated into a tractable optimization problem, accompanied by a constraint generation algorithm with convergence guarantees. Empirical results demonstrate that GARO achieves a superior trade-off between worst-case and average out-of-sample performance while offering stronger global assurances.
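The globalized regret objective described above can be sketched in formulas. The notation below is ours, introduced for illustration (the paper's exact symbols and rate functions are not reproduced here): $c(x,u)$ is the cost of decision $x$ under uncertain parameter $u$, $\mathcal{U}(\rho)$ is an uncertainty set of radius $\rho$ around the prediction, and $\alpha$ is a rate function.

```latex
% Adversarial regret of a decision x at uncertainty level rho
% (notation assumed for illustration, not taken from the paper):
R(x,\rho) \;=\; \max_{u \in \mathcal{U}(\rho)} c(x,u)
\;-\; \min_{x'} \max_{u \in \mathcal{U}(\rho)} c(x',u).

% GARO-style control of this gap uniformly over all levels rho,
% here with an absolute rate function alpha:
\min_{x} \; \sup_{\rho \ge 0} \;\bigl[\, R(x,\rho) - \alpha(\rho) \,\bigr].

% The relative variant instead bounds R(x,rho) by a multiple of the
% oracle robust cost at each rho.
```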
📝 Abstract
Optimization problems routinely depend on uncertain parameters that must be predicted before a decision is made. Classical robust and regret formulations are designed to handle erroneous predictions and can provide statistical error bounds in simple settings. However, when predictions lack rigorous error bounds (as is typical of modern machine learning methods), classical robust models often yield vacuous guarantees, while regret formulations can paradoxically produce decisions that are more optimistic than even a nominal solution. We introduce Globalized Adversarial Regret Optimization (GARO), a decision framework that controls adversarial regret, defined as the gap between the worst-case cost and the oracle robust cost, uniformly across all possible uncertainty set sizes. By design, GARO delivers absolute or relative performance guarantees against an oracle with full knowledge of the prediction error, without requiring any probabilistic calibration of the uncertainty set. We show that GARO equipped with a relative rate function generalizes the classical adaptation method of Lepski to downstream decision problems. We derive exact tractable reformulations for problems with affine worst-case cost functions and polyhedral norm uncertainty sets, and provide a discretization and a constraint-generation algorithm with convergence guarantees for general settings. Finally, experiments demonstrate that GARO yields solutions with a more favorable trade-off between worst-case and mean out-of-sample performance, as well as stronger global performance guarantees.
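The constraint-generation idea mentioned in the abstract can be illustrated on a generic robust min-max problem. The sketch below is not the paper's algorithm; it is a standard cutting-plane loop for a toy instance with an affine worst-case cost and a box uncertainty set (all problem data, names, and tolerances are our own assumptions):

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: affine cost c(x, u) = (c0 + A u) @ x over the simplex,
# box uncertainty set U(rho) = {u : ||u||_inf <= rho}.
# All data below are made-up illustration values, not from the paper.
rng = np.random.default_rng(0)
n, m = 4, 3
c0 = rng.normal(size=n)
A = rng.normal(size=(n, m))
rho = 0.5

def solve_master(scenarios):
    """Master LP: min t  s.t. (c0 + A u_k) @ x <= t for each stored
    scenario u_k, with x on the probability simplex."""
    # Decision vector z = (t, x_1, ..., x_n).
    A_ub = np.array([np.concatenate(([-1.0], c0 + A @ u)) for u in scenarios])
    b_ub = np.zeros(len(scenarios))
    A_eq = np.array([[0.0] + [1.0] * n])   # sum(x) = 1
    res = linprog(c=[1.0] + [0.0] * n, A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0], res.x[1:]

def worst_case(x):
    """Separation step: the cost is linear in u, so the worst case over
    the box has the closed form u* = rho * sign(A.T @ x)."""
    g = A.T @ x
    u = rho * np.sign(g)
    return u, c0 @ x + g @ u

# Constraint generation: alternate master solves and separation until
# no violated scenario remains. Finitely many sign patterns exist, so
# the loop terminates.
scenarios = [np.zeros(m)]  # start from the nominal scenario u = 0
for _ in range(50):
    t, x = solve_master(scenarios)
    u, wc = worst_case(x)
    if wc <= t + 1e-8:     # x is robust-optimal for the full box
        break
    scenarios.append(u)

print(round(t, 4), round(wc, 4))
```

At termination the master value `t` matches the true worst-case cost of the returned `x`, which is the usual convergence certificate for cutting-plane schemes of this kind.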
Problem

Research questions and friction points this paper is trying to address.

uncalibrated predictions
robust optimization
adversarial regret
uncertainty quantification
decision-making under uncertainty
Innovation

Methods, ideas, or system contributions that make the work stand out.

Globalized Adversarial Regret Optimization
uncalibrated predictions
adversarial regret
robust decision-making
oracle performance guarantee