Stochastic set-valued optimization and its application to robust learning

📅 2026-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited robustness of traditional empirical risk minimization under distributional shift and its inability to adequately capture the tail behavior of the loss distribution. The authors propose a novel stochastic set-valued optimization framework based on hyperbox sets, wherein decision variables are mapped to hyperboxes and the problem is reformulated as a multi-objective optimization problem. A key innovation lies in jointly modeling the lower and upper tails of the loss distribution via subquantiles and superquantiles. The resulting formulation is solved using a stochastic multi-gradient algorithm coupled with a Pareto knee-point selection strategy. This approach significantly enhances model robustness and test-time stability under distributional shift while maintaining accuracy comparable to that of empirical risk minimization.
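The tail quantities at the core of the formulation have simple empirical counterparts: the superquantile (also known as CVaR) at level α is the average loss at or above the α-quantile, while the subquantile is the average loss at or below it. A minimal sketch of these empirical estimators, illustrative only and not the paper's exact formulation:

```python
import numpy as np

def superquantile(losses, alpha):
    """Empirical superquantile (CVaR): mean of losses at or above the alpha-quantile."""
    q = np.quantile(losses, alpha)
    return losses[losses >= q].mean()

def subquantile(losses, alpha):
    """Empirical subquantile: mean of losses at or below the alpha-quantile."""
    q = np.quantile(losses, alpha)
    return losses[losses <= q].mean()

# A heavy-tailed loss sample: the superquantile exposes the upper tail
# that the plain mean (empirical risk) averages away.
losses = np.array([0.1, 0.2, 0.3, 1.0, 5.0])
lo, hi = subquantile(losses, 0.5), superquantile(losses, 0.5)
print(lo, losses.mean(), hi)
```

Treating such lower- and upper-tail averages as separate objectives is what yields the multi-objective reformulation described above.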

📝 Abstract
In this paper, we develop a stochastic set-valued optimization (SVO) framework tailored for robust machine learning. In the SVO setting, each decision variable is mapped to a set of objective values, and optimality is defined via set relations. We focus on SVO problems with hyperbox sets, which can be reformulated as multi-objective optimization (MOO) problems with finitely many objectives and serve as a foundation for representing or approximating more general mapped sets. Two special cases of hyperbox-valued optimization (HVO) are interval-valued (IVO) and rectangle-valued (RVO) optimization. We construct stochastic IVO/RVO formulations that incorporate subquantiles and superquantiles into the objective functions of the MOO reformulations, providing a new characterization for subquantiles. These formulations provide interpretable trade-offs by capturing both lower- and upper-tail behaviors of loss distributions, thereby going beyond standard empirical risk minimization and classical robust models. To solve the resulting multi-objective problems, we adopt stochastic multi-gradient algorithms and select a Pareto knee solution. In numerical experiments, the proposed algorithms with this selection strategy exhibit improved robustness and reduced variability across test replications under distributional shift compared with empirical risk minimization, while maintaining competitive accuracy.
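The solution approach pairs a stochastic multi-gradient step with knee-based selection on the resulting Pareto front. The sketch below illustrates both ingredients for the two-objective case, assuming a standard minimum-norm common-descent direction and a distance-to-extreme-line knee rule; the paper's actual algorithm and selection criterion may differ in detail:

```python
import numpy as np

def multigradient_direction(g1, g2):
    """Common descent direction for two objectives: the minimum-norm point
    in the convex hull of the (stochastic) gradients, closed form for m = 2.
    Stepping along -d decreases both objectives to first order."""
    diff = g1 - g2
    denom = float(diff @ diff)
    lam = 0.5 if denom == 0.0 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return lam * g1 + (1.0 - lam) * g2

def knee_point(front):
    """Index of the knee of a 2-D Pareto front: the point farthest from the
    line through the two extreme solutions, after min-max normalization."""
    f = np.asarray(front, dtype=float)
    rng = np.where(np.ptp(f, axis=0) == 0, 1.0, np.ptp(f, axis=0))
    f = (f - f.min(axis=0)) / rng
    a, b = f[np.argmin(f[:, 0])], f[np.argmax(f[:, 0])]
    ab = b - a
    # perpendicular distance of each front point to the line through a and b
    dist = np.abs(ab[0] * (f[:, 1] - a[1]) - ab[1] * (f[:, 0] - a[0])) / np.linalg.norm(ab)
    return int(np.argmax(dist))

# Orthogonal gradients: the minimum-norm combination weights them equally.
d = multigradient_direction(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
# A small Pareto front whose second point bulges toward the origin.
knee = knee_point([[0.0, 1.0], [0.1, 0.4], [0.5, 0.3], [1.0, 0.0]])
```

In the stochastic setting, `g1` and `g2` would be minibatch gradients of the two tail objectives, and the knee rule picks one solution from the approximated front as the reported model.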
Problem

Research questions and friction points this paper is trying to address.

robust learning
distributional shift
loss distribution tails
stochastic set-valued optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

stochastic set-valued optimization
hyperbox-valued optimization
subquantile
multi-objective optimization
Pareto knee solution
Tommaso Giovannelli
Department of Mechanical and Materials Engineering, University of Cincinnati, Cincinnati, OH 45221, USA
Jingfu Tan
Department of Industrial and Systems Engineering, Lehigh University, Bethlehem, PA 18015-1582, USA
Luis Nunes Vicente
Lehigh University
Optimization
Applied Mathematics
Operations Research