A Confidence Interval for the ℓ2 Expected Calibration Error

📅 2024-08-16
🏛️ arXiv.org
📈 Citations: 1
✨ Influential: 0
🤖 AI Summary
This work addresses the lack of rigorous statistical inference for probabilistic calibration of machine learning models. Specifically, it proposes the first asymptotically valid, non-negativity-preserving confidence interval for the ℓ₂ expected calibration error (ECE), applicable to top-1-to-k calibration, which includes both confidence calibration and full calibration. Methodologically, it introduces a debiased ECE estimator and characterizes its distinct convergence rates and asymptotic variances under calibrated versus miscalibrated models; it leverages asymptotic normality for bias correction; and it designs a confidence interval construction that jointly ensures statistical validity and non-negativity. Experiments demonstrate that the proposed intervals achieve accurate coverage with substantially shorter widths than resampling-based alternatives. This work provides the first statistically rigorous, computationally efficient, and trustworthy tool for quantifying model calibration performance.
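To make the debiasing idea concrete, here is a minimal sketch of a binned ℓ₂ ECE estimator for confidence calibration. The function name, the equal-width binning, and the specific correction (subtracting an estimate of the within-bin sampling variance of the accuracy-confidence gap, which removes the upward bias of the squared plug-in term) are illustrative assumptions, not necessarily the paper's exact construction.

```python
import numpy as np


def ece_l2_estimates(conf, correct, n_bins=10):
    """Plug-in and debiased estimates of the squared l2 ECE.

    Illustrative sketch with equal-width confidence bins; the paper's
    estimator and binning scheme may differ.
    """
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    n = conf.size
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # assign each prediction to a bin by its top-1 confidence
    idx = np.clip(np.digitize(conf, edges[1:-1]), 0, n_bins - 1)
    plug_in, debiased = 0.0, 0.0
    for b in range(n_bins):
        mask = idx == b
        n_b = int(mask.sum())
        if n_b == 0:
            continue
        # gap between empirical accuracy and mean confidence in the bin
        gap = correct[mask].mean() - conf[mask].mean()
        plug_in += (n_b / n) * gap**2
        if n_b > 1:
            # gap**2 overestimates the squared population gap by roughly the
            # sampling variance of the gap; subtract an estimate of it
            var_hat = np.var(correct[mask] - conf[mask], ddof=1) / n_b
            debiased += (n_b / n) * (gap**2 - var_hat)
        else:
            debiased += (n_b / n) * gap**2
    return plug_in, debiased
```

On simulated data from a perfectly calibrated model, the plug-in estimate stays strictly positive while the debiased estimate concentrates near zero, which is the behavior the paper's asymptotic analysis exploits.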

๐Ÿ“ Abstract
Recent advances in machine learning have significantly improved prediction accuracy in various applications. However, ensuring the calibration of probabilistic predictions remains a significant challenge. Despite efforts to enhance model calibration, the rigorous statistical evaluation of model calibration remains less explored. In this work, we develop confidence intervals for the $\ell_2$ Expected Calibration Error (ECE). We consider top-1-to-$k$ calibration, which includes both the popular notion of confidence calibration and full calibration. For a debiased estimator of the ECE, we show asymptotic normality, but with different convergence rates and asymptotic variances for calibrated and miscalibrated models. We develop methods to construct asymptotically valid confidence intervals for the ECE, accounting for this behavior as well as non-negativity. Our theoretical findings are supported through extensive experiments, showing that our methods produce valid confidence intervals with shorter lengths compared to those obtained by resampling-based methods.
Problem

Research questions and friction points this paper is trying to address.

Develop confidence intervals for Expected Calibration Error
Provide rigorous statistical evaluation of calibration for probabilistic predictions
Compare convergence rates for calibrated and miscalibrated models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Develops confidence intervals for $\ell_2$ ECE
Debiased estimator with asymptotic normality
Shorter valid intervals than resampling methods
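The non-negativity requirement on the intervals can be sketched as follows. This is a naive normal-approximation interval truncated at zero, using only the Python standard library; the function name and arguments are hypothetical, and the paper's actual construction additionally adapts to the slower convergence rate that arises when the model is exactly calibrated (ECE = 0), which this simple version does not handle.

```python
from statistics import NormalDist


def wald_ci_nonneg(theta_hat, se_hat, level=0.95):
    """Normal-approximation CI for a non-negative parameter, clipped at zero.

    Minimal illustrative sketch: valid only in the regime where the estimator
    is asymptotically normal at the usual root-n rate.
    """
    # two-sided critical value, e.g. ~1.96 for a 95% interval
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    lower = max(0.0, theta_hat - z * se_hat)
    upper = theta_hat + z * se_hat
    return lower, upper
```

Clipping the lower endpoint keeps the interval inside the parameter space, since a squared calibration error can never be negative.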