Constrained Online Convex Optimization with Polyak Feasibility Steps

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper studies online convex optimization with a fixed convex constraint $g(x) \leq 0$, where only the function value $g(x_t)$ and a subgradient $\partial g(x_t)$ are available at each round $t$. Existing methods under the same feedback guarantee $O(\sqrt{T})$ regret but only cumulative constraint satisfaction ($\sum_{t=1}^T g(x_t) \leq 0$), not per-round feasibility. The authors propose, under identical information assumptions, a two-stage update based on Polyak step sizes: each online gradient descent step is followed by a constraint-guided subgradient correction. This ensures per-round feasibility ($g(x_t) \leq 0$ for all $t \in [T]$) while retaining $O(\sqrt{T})$ regret. Numerical experiments show that the method outperforms baseline approaches in constraint enforcement.

📝 Abstract
In this work, we study online convex optimization with a fixed constraint function $g : \mathbb{R}^d \rightarrow \mathbb{R}$. Prior work on this problem has shown $O(\sqrt{T})$ regret and cumulative constraint satisfaction $\sum_{t=1}^{T} g(x_t) \leq 0$, while only accessing the constraint value and subgradient at the played actions $g(x_t), \partial g(x_t)$. Using the same constraint information, we show a stronger guarantee of anytime constraint satisfaction $g(x_t) \leq 0 \ \forall t \in [T]$, and matching $O(\sqrt{T})$ regret guarantees. These contributions are thanks to our approach of using Polyak feasibility steps to ensure constraint satisfaction, without sacrificing regret. Specifically, after each step of online gradient descent, our algorithm applies a subgradient descent step on the constraint function where the step-size is chosen according to the celebrated Polyak step-size. We further validate this approach with numerical experiments.
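The update described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's reference implementation: the function names (`polyak_feasibility_ogd`, `loss_grads`, `g_subgrad`) and the fixed step-size `eta` are assumptions for the example; the paper's analysis specifies its own step-size schedule.

```python
import numpy as np

def polyak_feasibility_ogd(loss_grads, g, g_subgrad, x0, eta, T):
    """Sketch: online gradient descent followed by a Polyak feasibility step.

    loss_grads: list of T callables, loss_grads[t](x) -> gradient of round-t loss
    g:          constraint function, feasible set is {x : g(x) <= 0}
    g_subgrad:  callable returning a subgradient of g at x
    """
    x = np.asarray(x0, dtype=float)
    iterates = []
    for t in range(T):
        # Stage 1: standard online gradient descent step on the round-t loss.
        x = x - eta * loss_grads[t](x)
        # Stage 2: if the constraint is violated, take a subgradient step on g
        # with the Polyak step-size g(x) / ||s||^2, which pulls x back toward
        # the feasible set {g <= 0}.
        viol = g(x)
        if viol > 0:
            s = g_subgrad(x)
            x = x - (viol / np.dot(s, s)) * s
        iterates.append(x.copy())
    return iterates
```

For example, with the unit-ball constraint $g(x) = \|x\| - 1$ the Polyak step maps any infeasible point exactly onto the sphere $\|x\| = 1$, so every played action is feasible.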
Problem

Research questions and friction points this paper is trying to address.

Online convex optimization with fixed constraints
Ensuring anytime constraint satisfaction
Achieving O(√T) regret guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Polyak feasibility steps
Ensuring anytime constraint satisfaction
Matching O(√T) regret guarantees