🤖 AI Summary
This study addresses the lack of a systematic, large-scale empirical evaluation of confidence interval (CI) methods for generalization error estimation. The authors conduct the largest benchmark of its kind to date, evaluating 13 CI construction methods across 19 tabular regression and classification tasks, 7 learner families (e.g., RF, XGBoost), and 8 loss functions. The study introduces a unified evaluation framework along three dimensions (coverage frequency, interval width, and runtime), integrating cross-validation, bootstrapping, and diverse variance estimation techniques. Results reveal fundamental trade-offs among accuracy, robustness, and computational efficiency, identify several methods that can be recommended across scenarios, and expose consistent failures of existing approaches in high-dimensional sparse or small-sample regimes. All experimental data (sourced from OpenML), code, and complete logs are publicly released, establishing the first reproducible, comprehensive benchmark platform for generalization-error CIs and thereby advancing trustworthy machine learning.
📝 Abstract
When assessing the quality of prediction models in machine learning, confidence intervals (CIs) for the generalization error, which measures predictive performance, are a crucial tool. Luckily, there exist many methods for computing such CIs and new promising approaches are continuously being proposed. Typically, these methods combine various resampling procedures, most popular among them cross-validation and bootstrapping, with different variance estimation techniques. Unfortunately, however, there is currently no consensus on when any of these combinations may be most reliably employed and how they generally compare. In this work, we conduct a large-scale study comparing CIs for the generalization error, the first one of such size, where we empirically evaluate 13 different CI methods on a total of 19 tabular regression and classification problems, using seven different inducers and a total of eight loss functions. We give an overview of the methodological foundations and inherent challenges of constructing CIs for the generalization error and provide a concise review of all 13 methods in a unified framework. Finally, the CI methods are evaluated in terms of their relative coverage frequency, width, and runtime. Based on these findings, we can identify a subset of methods that we would recommend. We also publish the datasets as a benchmarking suite on OpenML and our code on GitHub to serve as a basis for further studies.
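To make the setting concrete, here is a minimal sketch of the kind of naive CI that the benchmarked methods aim to improve upon: pool the per-observation losses from a cross-validation run and apply a normal approximation. The function name `naive_cv_ci` and the simulated loss data are illustrative assumptions, not code from the paper; the point is that pooled CV losses are not i.i.d., so such intervals tend to under-cover, which motivates the more careful variance estimation techniques compared in the study.

```python
import numpy as np
from statistics import NormalDist


def naive_cv_ci(losses_per_fold, alpha=0.05):
    """Naive normal-approximation CI for the generalization error,
    built from per-observation cross-validation losses.

    Caveat: this treats the pooled CV losses as i.i.d., which they are
    not (folds share training data), so the interval typically
    under-covers. Illustrative only.
    """
    losses = np.concatenate(losses_per_fold)
    n = losses.size
    mean = losses.mean()
    se = losses.std(ddof=1) / np.sqrt(n)          # naive standard error
    z = NormalDist().inv_cdf(1 - alpha / 2)       # two-sided normal quantile
    return mean - z * se, mean + z * se


# Hypothetical example: squared-error losses from a simulated 5-fold CV run
rng = np.random.default_rng(0)
folds = [rng.chisquare(df=1, size=40) for _ in range(5)]
lo, hi = naive_cv_ci(folds)
```

The point estimate always lies inside the interval by construction; the open question, and the subject of the benchmark, is how often the *true* generalization error does.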