Multiclass Calibration Assessment and Recalibration of Probability Predictions via the Linear Log Odds Calibration Function

📅 2026-02-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing methods struggle to effectively and interpretably evaluate and recalibrate the probabilistic outputs of black-box multiclass models without internal access. This work proposes the Multiclass Linear Log-Odds (MCLLO) recalibration framework, which, for the first time, enables calibration assessment and adjustment using only predicted probabilities from a single model. By modeling calibration through a linear transformation in log-odds space and employing a likelihood ratio test for direct calibration evaluation, MCLLO achieves both interpretability and broad applicability. Experiments across three real-world domains—image classification, obesity analysis, and ecological modeling—demonstrate that MCLLO matches or outperforms four state-of-the-art recalibration methods in terms of calibration performance.

Technology Category

Application Category

📝 Abstract
Machine-generated probability predictions are essential in modern classification tasks such as image classification. A model is well calibrated when its predicted probabilities correspond to observed event frequencies. Despite the need for multicategory recalibration methods, existing methods are limited to (i) comparing calibration between two or more models rather than directly assessing the calibration of a single model, (ii) requiring under-the-hood model access, e.g., accessing logit-scale predictions within the layers of a neural network, and (iii) providing output which is difficult for human analysts to understand. To overcome (i)-(iii), we propose Multicategory Linear Log Odds (MCLLO) recalibration, which (i) includes a likelihood ratio hypothesis test to assess calibration, (ii) does not require under-the-hood access to models and is thus applicable on a wide range of classification problems, and (iii) can be easily interpreted. We demonstrate the effectiveness of the MCLLO method through simulations and three real-world case studies involving image classification via convolutional neural network, obesity analysis via random forest, and ecology via regression modeling. We compare MCLLO to four comparator recalibration techniques utilizing both our hypothesis test and the existing calibration metric Expected Calibration Error to show that our method works well alone and in concert with other methods.
Problem

Research questions and friction points this paper is trying to address.

multiclass calibration
probability recalibration
model calibration
calibration assessment
interpretable calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

multiclass calibration
recalibration
linear log odds
likelihood ratio test
model-agnostic
🔎 Similar Papers
No similar papers found.
A
Amy Vennos
Department of Statistics, Virginia Polytechnic Institute and State University
Xin Xing
Xin Xing
Virginia Tech
StatisticsNonparametric InferenceBioinformaticsMetagenomicsDeep Learning
C
Christopher T. Franck
Department of Statistics, Virginia Polytechnic Institute and State University