🤖 AI Summary
This paper addresses the quantile crossing problem—non-monotonicity of estimated quantile curves across quantile levels—in quantile regression. We propose a smoothed stochastic gradient descent (SGD) algorithm that enforces strict monotonicity by smoothing the score function. Theoretically, we establish the first non-asymptotic tail probability bound with explicit convergence rate and a uniform Bahadur representation for quantile estimators; further, we develop a uniform Gaussian approximation with Polyak–Ruppert averaging, breaking free from classical asymptotic frameworks. Methodologically, our approach unifies monotonicity guarantees with non-asymptotic statistical characterization. Numerical experiments demonstrate that the algorithm eliminates quantile crossing in finite samples, yields controllable estimation error, and exhibits robust performance. Altogether, it provides a new paradigm for online quantile learning that is both theoretically rigorous and computationally feasible.
📝 Abstract
This paper considers the estimation of quantiles via a smoothed version of the stochastic gradient descent (SGD) algorithm. By smoothing the score function in the conventional SGD quantile algorithm, we achieve monotonicity in the quantile level in that the estimated quantile curves do not cross. We derive non-asymptotic tail probability bounds for the smoothed SGD quantile estimate both for the case with and without Polyak-Ruppert averaging. For the latter, we also provide a uniform Bahadur representation and a resulting Gaussian approximation result. Numerical studies show good finite sample behavior for our theoretical results.