AI Summary
This work addresses the lack of rigorous statistical inference for probabilistic calibration of machine learning models. Specifically, it proposes the first asymptotically valid, non-negativity-preserving confidence interval for the $\ell_2$ expected calibration error (ECE), applicable to top-1-to-$k$ calibration, including both confidence calibration and full calibration. Methodologically, it introduces a debiased ECE estimator, theoretically characterizing its distinct convergence rates and asymptotic variances under calibrated versus miscalibrated models; leverages asymptotic normality for bias correction; and designs a confidence interval construction strategy that jointly ensures statistical validity and non-negativity. Experiments demonstrate that the proposed intervals achieve accurate coverage and substantially shorter widths than resampling-based alternatives. This work provides the first statistically rigorous, computationally efficient, and trustworthy tool for quantifying model calibration performance.
Abstract
Recent advances in machine learning have significantly improved prediction accuracy in various applications. However, ensuring the calibration of probabilistic predictions remains a significant challenge. Despite efforts to enhance model calibration, its rigorous statistical evaluation remains less explored. In this work, we develop confidence intervals for the $\ell_2$ Expected Calibration Error (ECE). We consider top-1-to-$k$ calibration, which includes both the popular notion of confidence calibration as well as full calibration. For a debiased estimator of the ECE, we show asymptotic normality, but with different convergence rates and asymptotic variances for calibrated and miscalibrated models. We develop methods to construct asymptotically valid confidence intervals for the ECE, accounting for this behavior as well as non-negativity. Our theoretical findings are supported through extensive experiments, showing that our methods produce valid confidence intervals with shorter lengths compared to those obtained by resampling-based methods.
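To make the debiasing idea concrete, below is a minimal NumPy sketch of a binned plug-in estimator of the squared $\ell_2$ ECE for confidence calibration, together with a debiased variant that subtracts an estimate of the within-bin sampling variance. The equal-width binning scheme and the specific variance-correction term are illustrative assumptions, not necessarily the paper's exact construction; the function name `debiased_ece_sq` is hypothetical.

```python
import numpy as np

def debiased_ece_sq(conf, correct, n_bins=10):
    """Plug-in and debiased estimates of the squared L2 ECE (illustrative sketch).

    conf    : top-1 predicted confidences in [0, 1]
    correct : binary indicators, 1 if the top-1 prediction was correct
    """
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    n = conf.size
    # Equal-width bins over [0, 1] (an assumed binning scheme)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins = np.clip(np.digitize(conf, edges[1:-1]), 0, n_bins - 1)
    plug_in, debiased = 0.0, 0.0
    for b in range(n_bins):
        mask = bins == b
        n_b = int(mask.sum())
        if n_b == 0:
            continue
        acc_b = correct[mask].mean()   # empirical accuracy in bin b
        conf_b = conf[mask].mean()     # mean confidence in bin b
        gap_sq = (acc_b - conf_b) ** 2
        plug_in += (n_b / n) * gap_sq
        # Debiasing: subtract an estimate of the within-bin variance of acc_b,
        # which inflates the plug-in estimate even for a calibrated model.
        if n_b > 1:
            var_b = acc_b * (1.0 - acc_b) / (n_b - 1)
            debiased += (n_b / n) * (gap_sq - var_b)
        else:
            debiased += (n_b / n) * gap_sq
    return plug_in, debiased
```

Since the variance correction is non-negative, the debiased estimate never exceeds the plug-in estimate; for a well-calibrated model it can be negative, which is one reason a confidence-interval construction must handle non-negativity explicitly.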