Distributional bias compromises leave-one-out cross-validation

📅 2024-06-03
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This paper identifies a distributional bias induced by leave-one-out cross-validation (LOO-CV) in small-sample settings: the mean of each training set, which excludes its held-out sample, is systematically negatively correlated with that sample's label. This distorts model evaluation, and the effect is strongest under strong regularization, where performance is systematically underestimated. To address this, the paper formally defines and quantifies the bias for the first time and proposes ReBalanced CV, a scalable cross-validation framework that calibrates the training-set distribution via importance-weighted resampling. Theoretical analysis and extensive experiments on synthetic and real-world datasets, spanning logistic regression, random forests, and neural networks and evaluated with AUC-ROC and AUC-PR, demonstrate that ReBalanced CV significantly improves the accuracy of LOO-CV performance estimates, mitigates regularization bias in hyperparameter optimization, and makes model selection more robust.

📝 Abstract
Cross-validation is a common method for estimating the predictive performance of machine learning models. In a data-scarce regime, where one typically wishes to maximize the number of instances used for training the model, an approach called ‘leave-one-out cross-validation’ is often used. In this design, a separate model is built for predicting each data instance after training on all other instances. Since this results in a single test data point available per model trained, predictions are aggregated across the entire dataset to calculate common rank-based performance metrics such as the area under the receiver operating characteristic or precision-recall curves. In this work, we demonstrate that this approach creates a negative correlation between the average label of each training fold and the label of its corresponding test instance, a phenomenon that we term distributional bias. As machine learning models tend to regress to the mean of their training data, this distributional bias tends to negatively impact performance evaluation and hyperparameter optimization. We show that this effect generalizes to leave-P-out cross-validation and persists across a wide range of modeling and evaluation approaches, and that it can lead to a bias against stronger regularization. To address this, we propose a generalizable rebalanced cross-validation approach that corrects for distributional bias. We demonstrate that our approach improves cross-validation performance evaluation in synthetic simulations and in several published leave-one-out analyses.
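The negative correlation the abstract describes is exact at the level of fold label means: with labels y_1, …, y_n summing to S, the training fold for held-out instance i has mean (S − y_i)/(n − 1), a strictly decreasing linear function of y_i, so the correlation between fold means and held-out labels across all n folds is exactly −1. A few lines verify this (a minimal NumPy sketch; the seed and sample size are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # small-sample regime where LOO-CV is typically used
y = rng.integers(0, 2, size=n).astype(float)  # binary labels

# Each LOO training fold excludes exactly one label, so its mean is
# (S - y_i) / (n - 1), where S is the sum of all labels.
train_means = (y.sum() - y) / (n - 1)

# The fold mean is a decreasing linear function of the held-out label,
# so the correlation across folds is exactly -1.
r = np.corrcoef(train_means, y)[0, 1]
print(round(r, 6))  # → -1.0
```

Because models tend to regress toward their training mean, each model's predictions are pulled away from its own test label, which is what depresses pooled rank-based metrics such as AUC-ROC.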
Problem

Research questions and friction points this paper is trying to address.

Distributional bias affects leave-one-out cross-validation accuracy
Negative correlation between training and test labels skews evaluation
Proposed rebalanced cross-validation corrects bias in classification and regression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rebalanced cross-validation corrects distributional bias
Addresses negative correlation in leave-one-out validation
Improves performance evaluation and hyperparameter optimization
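The paper's exact rebalancing procedure is not detailed on this page, but the idea can be sketched: reweight each LOO training fold so its weighted label mean matches the full-sample mean, cancelling the fold-mean shift caused by removing the held-out instance. The function below is a hypothetical illustration for binary labels, not the authors' algorithm; it upweights training instances that share the held-out label by n_c/(n_c − 1), where n_c is that class's count in the full dataset (assumed ≥ 2):

```python
import numpy as np

def rebalanced_loo_weights(y, i):
    """Illustrative per-fold training weights for LOO fold i (binary labels).

    Upweights training instances of the held-out instance's class so the
    weighted training-label mean equals the full-sample mean. Assumes each
    class appears at least twice; this is a sketch, not the paper's method.
    """
    y = np.asarray(y, dtype=float)
    n_same = int((y == y[i]).sum())  # class count, including instance i
    train = np.delete(y, i)
    w = np.ones(len(y) - 1)
    w[train == y[i]] *= n_same / (n_same - 1)  # compensate for the removal
    return train, w

y = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 0], dtype=float)
for i in range(len(y)):
    train, w = rebalanced_loo_weights(y, i)
    # Every weighted fold mean now equals the overall mean (0.5 here),
    # removing the negative fold-mean/test-label correlation.
    assert abs(np.average(train, weights=w) - y.mean()) < 1e-12
print("all weighted fold means equal the overall mean:", y.mean())
```

With the fold means equalized, no fold's training distribution is shifted away from its test label, so strongly regularized models are no longer penalized for regressing toward a biased mean.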
George I. Austin
Department of Biomedical Informatics, Columbia University Irving Medical Center, New York, NY, USA; Program for Mathematical Genomics, Department of Systems Biology, Columbia University Irving Medical Center, New York, NY, USA
I. Pe’er
Program for Mathematical Genomics, Department of Systems Biology, Columbia University Irving Medical Center, New York, NY, USA; Department of Computer Science, Columbia University, New York, NY, USA
T. Korem
Program for Mathematical Genomics, Department of Systems Biology, Columbia University Irving Medical Center, New York, NY, USA; Department of Obstetrics and Gynecology, Columbia University Irving Medical Center, New York, NY, USA