PAC-Bayes Meets Online Contextual Optimization

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing online contextual optimization methods are predominantly frequentist, relying on gradient-based updates and deterministic predictors; this yields high prediction variance and difficulty with non-differentiable problems. Method: This paper proposes the first PAC-Bayesian framework for online contextual optimization, adopting a predict-then-optimize paradigm. It performs gradient-free stochastic optimization via the Gibbs posterior, supporting non-convex and non-smooth loss functions, and integrates general Bayesian updating, sequential Monte Carlo sampling, and PAC-Bayes analysis. Contribution/Results: For bounded, mixable losses, the method achieves an $O(\sqrt{T})$ regret bound. Theoretically and empirically, it significantly reduces prediction variance relative to conventional approaches while improving robustness and generalization in dynamic environments.
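The predict-then-optimize loop with a Gibbs-posterior update can be sketched as follows. This is a minimal, gradient-free illustration assuming a linear cost predictor and a simplex-constrained downstream problem; the names (`predict`, `optimize`, `decision_loss`) and the plain exponential-reweighting scheme are illustrative stand-ins, not the paper's actual sequential Monte Carlo sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each particle theta parameterizes a linear predictor of
# the cost vector c from context x; the downstream decision solves a possibly
# non-differentiable problem, here min over the unit simplex of <c_hat, z>.
def predict(theta, x):
    return theta @ x  # predicted cost vector

def optimize(c_hat):
    z = np.zeros_like(c_hat)
    z[np.argmin(c_hat)] = 1.0  # simplex vertex minimizing <c_hat, z>
    return z

def decision_loss(theta, x, c_true):
    # Excess cost of deciding under predicted costs vs. the best decision in
    # hindsight -- non-differentiable in theta, so gradient-free updates help.
    return c_true @ optimize(predict(theta, x)) - c_true.min()

# Gradient-free Gibbs-posterior update over a particle population
# (a crude stand-in for a full sequential Monte Carlo sampler).
d, n_particles, eta, T = 3, 200, 1.0, 50
particles = rng.normal(size=(n_particles, d, d))  # sample from the prior
log_w = np.zeros(n_particles)

cum_loss = 0.0
for t in range(T):
    x = rng.normal(size=d)
    c_true = rng.normal(size=d)
    # Predict with a particle drawn from the current (approximate) posterior.
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    theta = particles[rng.choice(n_particles, p=w)]
    cum_loss += decision_loss(theta, x, c_true)
    # Gibbs reweighting: multiply each weight by exp(-eta * round loss).
    log_w -= eta * np.array([decision_loss(p, x, c_true) for p in particles])

print(f"average decision regret over {T} rounds: {cum_loss / T:.3f}")
```

The loss here is nonnegative by construction, so the running average gives a quick sanity check that the reweighted population is tracking good predictors.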

📝 Abstract
The predict-then-optimize paradigm bridges online learning and contextual optimization in dynamic environments. Previous works have investigated the sequential updating of predictors using feedback from downstream decisions to minimize regret in the full-information setting. However, existing approaches are predominantly frequentist, rely heavily on gradient-based strategies, and employ deterministic predictors that can yield high variance in practice despite their asymptotic guarantees. This work introduces, to the best of our knowledge, the first Bayesian online contextual optimization framework. Grounded in PAC-Bayes theory and general Bayesian updating principles, our framework achieves $\mathcal{O}(\sqrt{T})$ regret for bounded and mixable losses via a Gibbs posterior, eliminates the dependence on gradients through sequential Monte Carlo samplers, and thereby accommodates nondifferentiable problems. Theoretical developments and numerical experiments substantiate our claims.
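For reference, the Gibbs posterior invoked in PAC-Bayesian analyses of this kind typically takes the following exponential-reweighting form (generic notation, assumed here rather than taken from the paper):

```latex
\[
  \pi_t(\theta) \;\propto\; \pi_0(\theta)\,
    \exp\!\Bigl(-\eta \sum_{s=1}^{t} \ell_s(\theta)\Bigr),
\]
```

where $\pi_0$ is a prior over predictors, $\ell_s$ is the bounded, mixable decision loss at round $s$, and $\eta > 0$ is a learning rate; sampling predictors from $\pi_t$ is what yields the $\mathcal{O}(\sqrt{T})$ regret claimed in the abstract.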
Problem

Research questions and friction points this paper is trying to address.

Existing methods are predominantly frequentist and rely on gradient-based updates
Deterministic predictors can exhibit high prediction variance despite asymptotic guarantees
Gradient dependence makes non-differentiable decision problems hard to handle
Innovation

Methods, ideas, or system contributions that make the work stand out.

First Bayesian online contextual optimization framework (to the authors' knowledge)
PAC-Bayes analysis with a Gibbs posterior, achieving $\mathcal{O}(\sqrt{T})$ regret for bounded, mixable losses
Sequential Monte Carlo samplers replace gradient-based updates, accommodating non-differentiable problems
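The sequential Monte Carlo ingredient above can be illustrated by its core degeneracy check: when the Gibbs weights collapse onto a few particles, resample the population. This is a minimal sketch under assumed notation; a full SMC sampler, such as the paper's, would also include move/rejuvenation steps, which are omitted here.

```python
import numpy as np

def resample_if_degenerate(particles, log_w, rng, threshold=0.5):
    """Resample particles and reset weights when the effective sample
    size (ESS) of the Gibbs-posterior weights drops below a threshold.
    A sketch only: move/rejuvenation steps of a full SMC sampler omitted."""
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)  # effective sample size
    n = len(particles)
    if ess < threshold * n:
        idx = rng.choice(n, size=n, p=w)  # multinomial resampling
        particles = particles[idx]
        log_w = np.zeros(n)               # equal weights after resampling
    return particles, log_w

# Toy usage: a heavily degenerate weight vector triggers a resample.
rng = np.random.default_rng(1)
particles = rng.normal(size=(100, 4))
log_w = np.full(100, -50.0)
log_w[0] = 0.0  # one particle dominates, so ESS is close to 1
particles, log_w = resample_if_degenerate(particles, log_w, rng)
```

Resampling keeps the particle approximation of the Gibbs posterior from degenerating as losses accumulate, which is what lets the update remain gradient-free over many rounds.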