🤖 AI Summary
Evaluating uncertainty quantification (UQ) methods in deep learning classification remains challenging due to the absence of ground-truth uncertainty labels, hindering objective, rigorous assessment.
Method: We propose a novel, theory-grounded, no-ground-truth evaluation framework that quantifies UQ quality solely from standard supervised test sets, without requiring any uncertainty annotations. Leveraging theoretical analysis, we rigorously establish that widely used confidence-ranking metrics (e.g., AUROC, AUPR) implicitly measure the statistical consistency of the relative trustworthiness ordering among predictions.
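To make the connection concrete, here is a minimal sketch (not the paper's code) of how such a ranking-based evaluation is typically computed from an ordinary labelled test set: the model's confidence is turned into an uncertainty score, prediction correctness plays the role of the trustworthiness label, and AUROC/AUPR measure how consistently the uncertainty ranks misclassified points above correctly classified ones. The names `model` and `test_loader`, and the use of one minus the maximum softmax probability as the uncertainty score, are illustrative assumptions.

```python
# Sketch only: ranking-based UQ evaluation on a standard labelled test set.
# Assumes a PyTorch classifier; the uncertainty score (1 - max softmax prob)
# is one common choice, not the only one.
import numpy as np
import torch
from sklearn.metrics import roc_auc_score, average_precision_score

@torch.no_grad()
def ranking_metrics(model, test_loader, device="cpu"):
    model.eval().to(device)
    uncertainties, errors = [], []
    for x, y in test_loader:
        probs = torch.softmax(model(x.to(device)), dim=-1)
        conf, pred = probs.max(dim=-1)
        uncertainties.append((1.0 - conf).cpu().numpy())        # uncertainty score
        errors.append((pred.cpu() != y).numpy().astype(int))    # 1 = misclassified
    u, e = np.concatenate(uncertainties), np.concatenate(errors)
    # AUROC / AUPR of "uncertainty predicts misclassification": high values mean
    # the UQ method ranks untrustworthy predictions above trustworthy ones.
    return roc_auc_score(e, u), average_precision_score(e, u)
```

Under this reading, the reported AUROC/AUPR values summarise ranking fidelity rather than calibration in the strict sense, which is exactly the interpretation the framework formalises.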
Contribution/Results: Our framework provides the first formal theoretical foundation and interpretability guarantee for such ranking-based metrics, enabling plug-and-play, theoretically valid quantification of uncertainty calibration and ranking fidelity. It bridges a critical gap between theoretical UQ evaluation and practical industrial deployment, offering both mathematical rigor and empirical usability across diverse models and datasets.
📄 Abstract
Despite the increasing demand for safer machine learning practices, the use of Uncertainty Quantification (UQ) methods in production remains limited. This limitation is exacerbated by the challenge of validating UQ methods in the absence of UQ ground truth. In classification tasks, when only a usual set of test data is at hand, several authors have suggested metrics that can be computed from such test points to assess the quality of quantified uncertainties. This paper investigates such metrics and proves that they are theoretically well behaved and actually tied to an uncertainty ground truth that is easily interpretable as a ranking of model predictions by trustworthiness. Equipped with these new results, and given the applicability of these metrics in the usual supervised paradigm, we argue that our contributions will help promote a broader use of UQ in deep learning.