🤖 AI Summary
This paper identifies a distributional bias induced by leave-one-out cross-validation (LOO-CV) in small-sample settings: the mean label of each training fold, which excludes the held-out sample, is systematically negatively correlated with that sample's label. This distorts model evaluation, particularly under strong regularization, where performance is systematically underestimated. To address this, the paper formally defines and quantifies the bias for the first time and proposes ReBalanced CV, a scalable, reweighting-based cross-validation framework that calibrates the training-set distribution via importance-weighted resampling. Theoretical analysis and extensive experiments on synthetic and real-world datasets, spanning logistic regression, random forests, and neural networks and evaluated with AUC-ROC and AUC-PR, demonstrate that ReBalanced CV significantly improves the accuracy of LOO-CV performance estimates, mitigates regularization bias in hyperparameter optimization, and makes model selection more robust.
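The summary does not spell out the reweighting scheme, but one plausible instantiation for binary labels is to weight each training example so that the weighted label prevalence of the fold matches the full-dataset prevalence. The sketch below is our illustration of that generic idea, not necessarily the paper's exact ReBalanced CV procedure; the function name and setup are ours.

```python
import numpy as np

def rebalance_weights(y_train, target_prev):
    """Importance weights that make the weighted label mean of a training
    fold equal target_prev (a generic reweighting sketch; the paper's
    exact ReBalanced CV scheme may differ)."""
    fold_prev = y_train.mean()
    return np.where(y_train == 1,
                    target_prev / fold_prev,
                    (1 - target_prev) / (1 - fold_prev))

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=30).astype(float)  # small binary-label dataset
p = y.mean()                                   # full-dataset prevalence

# One LOO fold: drop instance 0, reweight the remainder
y_tr = np.delete(y, 0)
w = rebalance_weights(y_tr, p)
weighted_prev = np.average(y_tr, weights=w)    # matches p by construction
```

Models that accept per-sample weights (e.g. via a `sample_weight` argument) can then be trained on each fold with these weights, removing the fold-to-fold drift in the training-label mean.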
📝 Abstract
Cross-validation is a common method for estimating the predictive performance of machine learning models. In a data-scarce regime, where one typically wishes to maximize the number of instances used for training the model, an approach called ‘leave-one-out cross-validation’ is often used. In this design, a separate model is built for predicting each data instance after training on all other instances. Since this results in a single test data point available per model trained, predictions are aggregated across the entire dataset to calculate common rank-based performance metrics such as the area under the receiver operating characteristic or precision-recall curves. In this work, we demonstrate that this approach creates a negative correlation between the average label of each training fold and the label of its corresponding test instance, a phenomenon that we term distributional bias. As machine learning models tend to regress to the mean of their training data, this distributional bias tends to negatively impact performance evaluation and hyperparameter optimization. We show that this effect generalizes to leave-P-out cross-validation and persists across a wide range of modeling and evaluation approaches, and that it can lead to a bias against stronger regularization. To address this, we propose a generalizable rebalanced cross-validation approach that corrects for distributional bias. We demonstrate that our approach improves cross-validation performance evaluation in synthetic simulations and in several published leave-one-out analyses.
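The negative correlation the abstract describes follows directly from the arithmetic of leaving one sample out: each fold's training mean is an affine, decreasing function of the held-out label, so with binary labels the correlation is exactly -1. A minimal simulation (our own construction, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
y = rng.integers(0, 2, size=n).astype(float)  # binary labels, small sample

# For each LOO fold, the training mean excludes the held-out label y[i]:
# mean_i = (sum(y) - y[i]) / (n - 1), a decreasing affine function of y[i].
loo_train_means = (y.sum() - y) / (n - 1)

corr = np.corrcoef(loo_train_means, y)[0, 1]
print(corr)  # -1.0 up to floating-point error
```

A model that regresses toward its training mean will therefore tend to predict slightly low for positive held-out instances and slightly high for negative ones, which is the distributional bias that depresses aggregated rank-based metrics such as AUC-ROC.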