🤖 AI Summary
This work addresses the severe underestimation of out-of-sample risk by generalized cross-validation (GCV) in high-dimensional ridge regression when training samples are correlated. We establish sharp asymptotic theory for training and test risks under arbitrary correlation structures. We propose CorrGCV, a corrected estimator that is unbiased, efficiently computable, and concentrates in the high-dimensional limit when the noise shares the correlation structure of the data. Crucially, this is the first extension of GCV to time-series forecasting settings where test points are correlated with the training set. Leveraging random matrix theory and high-dimensional asymptotics, the theory closely matches empirical results across diverse real-world high-dimensional datasets. Experiments demonstrate that CorrGCV substantially outperforms standard GCV, sharply characterizing and correcting the optimistic bias in long-time risk estimation induced by sample correlations.
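For reference, the standard GCV estimator that CorrGCV modifies is the ridge training error inflated by an effective degrees-of-freedom factor. The display below uses the textbook form with generic notation (design matrix $X \in \mathbb{R}^{n \times p}$, ridge penalty $\lambda$), which may differ from the paper's conventions:

\[
\widehat{R}_{\mathrm{GCV}}(\lambda) \;=\; \frac{\tfrac{1}{n}\,\lVert (I - S_\lambda)\, y \rVert^2}{\bigl(\tfrac{1}{n}\,\operatorname{tr}(I - S_\lambda)\bigr)^2},
\qquad
S_\lambda \;=\; X\,(X^\top X + n\lambda I)^{-1} X^\top .
\]

The paper's central point is that this denominator correction is calibrated for independent samples; under sample correlations, the resulting estimate is systematically optimistic, and CorrGCV replaces it with a correlation-aware counterpart.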
📄 Abstract
Recent years have seen substantial advances in our understanding of high-dimensional ridge regression, but existing theories assume that training examples are independent. By leveraging techniques from random matrix theory and free probability, we provide sharp asymptotics for the in- and out-of-sample risks of ridge regression when the data points have arbitrary correlations. We demonstrate that in this setting, the generalized cross-validation (GCV) estimator fails to correctly predict the out-of-sample risk. However, in the case where the noise residuals have the same correlations as the data points, one can modify the GCV to yield an efficiently computable unbiased estimator that concentrates in the high-dimensional limit, which we dub CorrGCV. We further extend our asymptotic analysis to the case where the test point has nontrivial correlations with the training set, a setting often encountered in time series forecasting. Assuming knowledge of the correlation structure of the time series, this again yields an extension of the GCV estimator, and sharply characterizes the degree to which such test points yield an overly optimistic prediction of long-time risk. We validate the predictions of our theory across a variety of high-dimensional data.
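The failure mode described above is easy to reproduce numerically. The minimal sketch below is an illustration only, not the paper's experimental setup: it draws training samples with an AR(1) sample-sample correlation (an arbitrary choice of structure, strength, dimensions, and penalty), fits ridge regression, and compares the standard GCV estimate against a Monte Carlo measurement of the risk on fresh, uncorrelated test points.

```python
# Minimal sketch: standard GCV is optimistic when training samples are
# correlated. AR(1) correlation, dimensions, and penalty are illustrative
# assumptions, not the paper's configuration.
import numpy as np

rng = np.random.default_rng(0)
n, p, lam, rho, sigma = 300, 150, 0.1, 0.9, 0.5

# AR(1) sample-sample correlation C_ij = rho^|i-j| and its Cholesky factor
C = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
L = np.linalg.cholesky(C)

# Correlated design: left-multiplying by L correlates rows across samples;
# the noise is given the same correlation structure, as in the CorrGCV setting
X = L @ rng.standard_normal((n, p)) / np.sqrt(p)
beta = rng.standard_normal(p) / np.sqrt(p)
y = X @ beta + sigma * (L @ rng.standard_normal(n))

# Ridge "hat" matrix S and training residuals
A = X.T @ X + n * lam * np.eye(p)
S = X @ np.linalg.solve(A, X.T)
resid = y - S @ y

# Standard GCV estimate of out-of-sample risk
gcv = np.mean(resid**2) / (1.0 - np.trace(S) / n) ** 2

# Monte Carlo out-of-sample risk on fresh, uncorrelated test points
beta_hat = np.linalg.solve(A, X.T @ y)
X_test = rng.standard_normal((5000, p)) / np.sqrt(p)
y_test = X_test @ beta + sigma * rng.standard_normal(5000)
test_risk = np.mean((y_test - X_test @ beta_hat) ** 2)

print(f"GCV estimate: {gcv:.4f}")
print(f"test risk:    {test_risk:.4f}")  # typically exceeds the GCV estimate
```

With strong sample correlations (here rho = 0.9), the GCV estimate typically falls well below the measured test risk, matching the optimistic bias the paper characterizes; the corrected CorrGCV estimator, whose exact form is given in the paper, is designed to close this gap.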