🤖 AI Summary
This paper addresses the problem of selecting a historical data window for statistical learning in nonstationary environments, balancing data-utilization efficiency against cumulative bias. We propose a dynamic look-back windowing mechanism grounded in a stability principle, and introduce two novel components: a measure of similarity between functions and a segmentation technique that divides the nonstationary data sequence into quasi-stationary pieces, together enabling adaptive modeling under unknown nonstationarity. Through stability analysis under strongly convex and Lipschitz-only loss assumptions, combined with a piecewise approximation argument, we establish a regret-theoretic framework and derive regret bounds that are minimax optimal up to logarithmic factors. Numerical experiments demonstrate that the method is robust and adaptive across diverse nonstationary patterns, including abrupt shifts, gradual drifts, and periodic variations.
📝 Abstract
We develop a versatile framework for statistical learning in non-stationary environments. In each time period, our approach applies a stability principle to select a look-back window that maximizes the utilization of historical data while keeping the cumulative bias within an acceptable range relative to the stochastic error. Our theory and numerical experiments showcase the adaptivity of this approach to unknown non-stationarity. We prove regret bounds that are minimax optimal up to logarithmic factors when the population losses are strongly convex, or Lipschitz only. At the heart of our analysis lie two novel components: a measure of similarity between functions and a segmentation technique for dividing the non-stationary data sequence into quasi-stationary pieces.
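To make the stability principle concrete, here is a minimal illustrative sketch for the simplest case of tracking a drifting mean. It is not the paper's exact procedure: the function `select_window`, the candidate grid, and the tolerance constant `c` are all hypothetical choices. The idea it illustrates is the one described above: grow the look-back window as long as the estimate computed on the larger window stays within a stochastic-error tolerance (here `c / sqrt(k')`) of the estimates from every smaller window, and stop once the cumulative bias from old data exceeds that tolerance.

```python
import math

def select_window(data, candidates, c=1.0):
    """Stability-based look-back window selection (illustrative sketch).

    Accept the largest candidate window k whose trailing mean agrees with
    the trailing mean of every smaller candidate k' to within c / sqrt(k'),
    a proxy for the stochastic error at sample size k'.
    """
    def tail_mean(k):
        # Average of the k most recent observations (newest data last).
        return sum(data[-k:]) / k

    best = candidates[0]
    for k in candidates:
        stable = all(
            abs(tail_mean(k) - tail_mean(kp)) <= c / math.sqrt(kp)
            for kp in candidates if kp < k
        )
        if not stable:
            break  # larger windows would only add more pre-shift bias
        best = k
    return best, tail_mean(best)

# Abrupt shift: 50 samples at level 0, then 10 samples at level 10.
data = [0.0] * 50 + [10.0] * 10
k, est = select_window(data, [1, 2, 4, 8, 16, 32])
# The window stops growing once it would reach back past the shift.
```

On this toy sequence the rule accepts windows up to 8 (all post-shift data) and rejects 16, which would mix in pre-shift samples whose bias dwarfs the stochastic tolerance, matching the abstract's goal of maximizing data use subject to bias control.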