🤖 AI Summary
This paper addresses the sparse learning problem under data exhibiting both heavy-tailed distributions and local stationarity. We propose the first robust sparse learning framework tailored to heavy-tailed locally stationary processes. Methodologically, we establish the first non-asymptotic oracle inequality that unifies treatment of ℓ₁-norm and total variation regularizations, integrating least-squares loss with robust heavy-tailed estimation. Theoretically, our estimator achieves the optimal convergence rate, and statistical inference is rigorously supported by non-asymptotic concentration inequalities. Compared to existing approaches, the framework significantly enhances robustness against heavy-tailed noise, interpretability, and finite-sample performance. It provides a novel modeling tool for dynamic data with sharp peaks and heavy tails—arising, for instance, in finance and signal processing.
📝 Abstract
Sparsified Learning is ubiquitous in many machine learning tasks. It aims to regularize the objective function by adding a penalization term that considers the constraints made on the learned parameters. This paper considers the problem of learning heavy-tailed LSP. We develop a flexible and robust sparse learning framework capable of handling heavy-tailed data with locally stationary behavior and propose concentration inequalities. We further provide non-asymptotic oracle inequalities for different types of sparsity, including $ell_1$-norm and total variation penalization for the least square loss.