🤖 AI Summary
This work establishes high-probability regret bounds for empirical risk minimization (ERM) and extends them to learning problems involving nuisance components, such as causal inference, missing data, and domain adaptation. Using a three-step approach -- a basic inequality, localized uniform concentration bounds, and a fixed-point argument -- centered on a critical radius defined via localized Rademacher complexity, the study characterizes convergence rates within a modular analytical framework. This framework unifies the treatment of standard and nuisance-augmented ERM, explicitly decomposes statistical and approximation errors, and provides sufficient conditions for fast convergence. It recovers classical rates for VC-subgraph classes, Sobolev/Hölder spaces, and bounded-variation function classes, and delivers transferable regret guarantees for orthogonal learning settings.
📝 Abstract
This guide develops high-probability regret bounds for empirical risk minimization (ERM). The presentation is modular: we state broadly applicable guarantees under high-level conditions and give tools for verifying them for specific losses and function classes. We emphasize that many ERM rate derivations can be organized around a three-step recipe -- a basic inequality, a uniform local concentration bound, and a fixed-point argument -- which yields regret bounds in terms of a critical radius, defined via localized Rademacher complexity, under a mild Bernstein-type variance--risk condition. To make these bounds concrete, we upper bound the critical radius using local maximal inequalities and metric-entropy integrals, recovering familiar rates for VC-subgraph, Sobolev/Hölder, and bounded-variation classes.
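To indicate the shape of the guarantee this recipe produces, the following sketch uses illustrative notation of our own (the excess loss $g_f = \ell_f - \ell_{f^*}$, i.i.d. Rademacher signs $\varepsilon_i$, and a constant $C > 0$ are not symbols taken from the guide); it records one common form of the localized analysis, not the guide's exact statements.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Illustrative notation (ours, not the guide's): excess loss g_f = \ell_f - \ell_{f^*},
% i.i.d. Rademacher signs \varepsilon_i, data Z_1,\dots,Z_n, and a constant C > 0.
Localized Rademacher complexity and critical radius:
\[
  \mathcal{R}_n(\delta)
  = \mathbb{E}\Bigl[\sup_{f \in \mathcal{F}:\, \mathbb{E}[g_f^2] \le \delta^2}
      \Bigl|\tfrac{1}{n}\sum_{i=1}^{n} \varepsilon_i\, g_f(Z_i)\Bigr|\Bigr],
  \qquad
  \delta_n = \inf\bigl\{\delta > 0 : \mathcal{R}_n(\delta) \le \delta^2 / C\bigr\}.
\]
Under a Bernstein-type variance--risk condition
$\mathrm{Var}(g_f) \le B \, \mathbb{E}[g_f]$ for all $f \in \mathcal{F}$,
the ERM $\hat{f}$ satisfies, with probability at least $1 - \zeta$,
\[
  \mathbb{E}\bigl[g_{\hat f}\bigr]
  \;\lesssim\;
  \delta_n^2 + \frac{(B + 1)\log(1/\zeta)}{n}.
\]
\end{document}
```

Roughly, the basic inequality relates the regret of $\hat f$ to an empirical process term, the localized concentration bound controls that term at scale $\delta$, and the fixed point $\delta_n$ is where the two balance; the variance--risk condition is what allows the localization to self-improve to the fast $\delta_n^2$ rate.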
We also review ERM with nuisance components -- including weighted ERM and Neyman-orthogonal losses -- as they arise in causal inference, missing data, and domain adaptation. Following the orthogonal learning framework, we highlight that these problems often admit regret-transfer bounds linking regret under an estimated loss to population regret under the target loss. These bounds typically decompose regret into (i) statistical error under the estimated (optimized) loss and (ii) approximation error due to nuisance estimation. Under sample splitting or cross-fitting, the first term can be controlled using standard fixed-loss ERM regret bounds, while the second term depends only on nuisance-estimation accuracy. We also treat the in-sample regime, where nuisances and the ERM are fit on the same data, deriving regret bounds and giving sufficient conditions for fast rates.
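As a schematic of the regret-transfer decomposition described above, again in illustrative notation of our own (a nuisance-dependent loss $\ell(f; g)$, true nuisance $g_0$, cross-fitted estimate $\hat g$, and population risks $L_g(f) = \mathbb{E}[\ell(f; g)]$ with minimizer $f^*_g$ over $\mathcal{F}$; none of these symbols are taken from the abstract):

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Illustrative notation (ours): loss \ell(f; g) with nuisance g, true nuisance g_0,
% cross-fitted estimate \hat{g}, population risks L_g(f) = E[\ell(f; g)],
% and f^*_g the minimizer of L_g over \mathcal{F}.
A regret-transfer bound of the kind described above has the schematic form
\[
  \underbrace{L_{g_0}(\hat f) - L_{g_0}(f^*_{g_0})}_{\text{target regret}}
  \;\lesssim\;
  \underbrace{L_{\hat g}(\hat f) - L_{\hat g}(f^*_{\hat g})}_{\text{(i) statistical error under the estimated loss}}
  \;+\;
  \underbrace{\mathrm{err}(\hat g, g_0)}_{\text{(ii) nuisance approximation error}},
\]
where, for a Neyman-orthogonal loss, the approximation term depends on the nuisance
error only at second (or higher) order, e.g.\ scaling with $\|\hat g - g_0\|^2$
in a suitable norm.
\end{document}
```

With sample splitting or cross-fitting, term (i) is an ordinary fixed-loss ERM regret (the nuisance is frozen on the fold used for ERM), so critical-radius bounds of the kind sketched after the first paragraph apply directly; term (ii) is purely a nuisance-estimation quantity.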