First-order methods for stochastic and finite-sum convex optimization with deterministic constraints

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses stochastic and finite-sum convex optimization problems subject to deterministic constraints. Conventional approaches seek ε-expectedly feasible solutions, in which constraint violations are controlled only in expectation; this can be unsuitable in practice, where constraints often must be nearly satisfied with certainty. To bridge this gap, we introduce the notion of an ε-surely feasible stochastic optimal (ε-SFSO) solution: one whose constraint violation is deterministically bounded by ε while its expected optimality gap is at most ε. Methodologically, we handle the deterministic constraints through a quadratic penalty framework and apply an accelerated stochastic gradient (ASG) scheme, or a modified variance-reduced ASG scheme, only once to a sequence of quadratic penalty subproblems with appropriately chosen penalty parameters. Theoretically, we establish first-order oracle complexity bounds for computing an ε-SFSO solution and, as a byproduct, derive corresponding complexity results for the sample average approximation method.
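
For orientation, the setting can be written schematically as below. This is a paraphrase of the summary and abstract in generic notation; the feasible set X, constraint map g, and the norm used to measure violation are illustrative choices, not necessarily the paper's exact formulation.

```latex
% Generic template for the problem class, the quadratic penalty subproblems,
% and the eps-SFSO notion described above (illustrative notation only).
\begin{align*}
  &\min_{x \in X} \; f(x) := \mathbb{E}_{\xi}\!\left[F(x,\xi)\right]
     \quad\text{or}\quad f(x) := \frac{1}{n}\sum_{i=1}^{n} f_i(x)
     \qquad \text{s.t.}\quad g(x) \le 0 \ \ \text{(deterministic)}, \\[4pt]
  &\text{penalty subproblem:}\quad
     \min_{x \in X} \; f(x) + \frac{\rho}{2}\,\bigl\|[g(x)]_{+}\bigr\|^{2}
     \quad \text{for an increasing sequence of penalty parameters } \rho > 0, \\[4pt]
  &\varepsilon\text{-SFSO solution } \bar{x}:\quad
     \bigl\|[g(\bar{x})]_{+}\bigr\| \le \varepsilon \ \text{(deterministic bound)},
     \qquad \mathbb{E}\!\left[f(\bar{x})\right] - f^{*} \le \varepsilon .
\end{align*}
```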

📝 Abstract
In this paper, we study a class of stochastic and finite-sum convex optimization problems with deterministic constraints. Existing methods typically aim to find an ε-expectedly feasible stochastic optimal solution, in which the expected constraint violation and expected optimality gap are both within a prescribed tolerance ε. However, in many practical applications, constraints must be nearly satisfied with certainty, rendering such solutions potentially unsuitable due to the risk of substantial violations. To address this issue, we propose stochastic first-order methods for finding an ε-surely feasible stochastic optimal (ε-SFSO) solution, where the constraint violation is deterministically bounded by ε and the expected optimality gap is at most ε. Our methods apply an accelerated stochastic gradient (ASG) scheme or a modified variance-reduced ASG scheme only once to a sequence of quadratic penalty subproblems with appropriately chosen penalty parameters. We establish first-order oracle complexity bounds for the proposed methods in computing an ε-SFSO solution. As a byproduct, we also derive first-order oracle complexity results for the sample average approximation method in computing an ε-SFSO solution of the stochastic optimization problem, using our proposed methods to solve the sample average problem.
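
To make the algorithmic template concrete, here is a minimal Python sketch of a quadratic-penalty outer loop in which each subproblem gets a single pass of a Nesterov-style accelerated stochastic gradient method, warm-started from the previous one. The penalty schedule, step sizes, momentum weights, iteration counts, and the toy instance are placeholder assumptions; they do not reproduce the parameter choices or complexity guarantees analyzed in the paper.

```python
# Illustrative sketch only: a quadratic-penalty outer loop whose subproblems are each
# handled by a single pass of a Nesterov-style accelerated stochastic gradient method.
# Penalty schedule, step sizes, momentum, and iteration counts are placeholder choices.
import numpy as np

def asg_on_penalty(x0, stoch_grad_f, g, grad_g, project, rho, iters, step):
    """One ASG pass on the penalty subproblem  min_x f(x) + (rho/2)*||max(g(x),0)||^2."""
    x = y = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        viol = np.maximum(g(y), 0.0)                  # deterministic constraint violation at y
        grad_pen = rho * grad_g(y).T @ viol           # exact gradient of the quadratic penalty
        grad = stoch_grad_f(y) + grad_pen             # stochastic gradient of penalized objective
        x_new = project(y - step * grad)              # projected gradient step
        y = x_new + (k - 1.0) / (k + 2.0) * (x_new - x)   # Nesterov-style extrapolation
        x = x_new
    return x

def penalty_method(x0, stoch_grad_f, g, grad_g, project, rhos, iters, step_for):
    """Solve a sequence of penalty subproblems with increasing rho, warm-starting each one."""
    x = np.asarray(x0, dtype=float)
    for rho in rhos:
        x = asg_on_penalty(x, stoch_grad_f, g, grad_g, project, rho, iters, step_for(rho))
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical toy instance: min_x E[(x - xi)^2], xi ~ N(0.5, 1), subject to x >= 1.
    stoch_grad_f = lambda x: 2.0 * (x - rng.normal(0.5, 1.0, size=x.shape))
    g = lambda x: 1.0 - x                             # g(x) <= 0  <=>  x >= 1
    grad_g = lambda x: -np.eye(x.size)                # Jacobian of g
    project = lambda x: x                             # X = R here, so projection is the identity
    step_for = lambda rho: 1.0 / (2.0 + rho)          # ~ 1 / (smoothness of penalized objective)
    x = penalty_method(np.zeros(1), stoch_grad_f, g, grad_g, project,
                       rhos=[1.0, 10.0, 100.0, 1000.0], iters=2000, step_for=step_for)
    print("approximate solution:", x)                 # should end up near the constrained optimum x = 1
```

In this sketch the constraints are deterministic, so the penalty gradient is computed exactly and only the objective gradient is sampled, mirroring the paper's setting of deterministic constraints with a stochastic or finite-sum objective.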
Problem

Research questions and friction points this paper is trying to address.

Solve stochastic convex optimization with deterministic constraints
Ensure nearly certain constraint satisfaction in optimization
Propose first-order methods for ε-surely feasible solutions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Accelerated stochastic gradient for deterministic constraints
Modified variance-reduced ASG scheme (a generic variance-reduction sketch follows this list)
Quadratic penalty subproblems with appropriately chosen penalty parameters
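
For background on the variance-reduction bullet above: variance-reduced gradient estimators are typically used when the objective has finite-sum structure. The snippet below shows the standard SVRG-style estimator that such schemes build on; it is only a point of reference, not the paper's modified variance-reduced ASG estimator, whose exact form is not reproduced here, and the toy data are hypothetical.

```python
# Background sketch: the standard SVRG-style variance-reduced gradient estimator for a
# finite sum f(x) = (1/n) * sum_i f_i(x).  Shown only as a generic point of reference;
# it is NOT the paper's modified variance-reduced ASG estimator, whose exact form differs.
import numpy as np

def svrg_grad(x, snapshot, mu, grad_fi, i):
    """Unbiased estimate of the full gradient at x: one component gradient at x, corrected
       by the same component at a snapshot point and the full gradient mu stored there."""
    return grad_fi(x, i) - grad_fi(snapshot, i) + mu

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A, b = rng.normal(size=(50, 3)), rng.normal(size=50)        # hypothetical least-squares data
    grad_fi = lambda x, i: (A[i] @ x - b[i]) * A[i]             # gradient of f_i(x) = (a_i^T x - b_i)^2 / 2
    x, snapshot = rng.normal(size=3), np.zeros(3)
    mu = A.T @ (A @ snapshot - b) / len(b)                      # full gradient at the snapshot
    i = int(rng.integers(len(b)))
    print(svrg_grad(x, snapshot, mu, grad_fi, i))               # variance-reduced stochastic gradient
```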
Zhaosong Lu
University of Minnesota
continuous optimization, machine learning, computational statistics
Yifeng Xiao
Department of Industrial and Systems Engineering, University of Minnesota, USA