Improving Semantic Uncertainty Quantification in Language Model Question-Answering via Token-Level Temperature Scaling

📅 2026-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current language models produce miscalibrated semantic uncertainty estimates with weak discriminative capacity in question-answering tasks, partly because existing confidence measures rely on fixed-temperature heuristics. This work proposes applying temperature scaling at the token level, using a single scalar temperature, to post-process and recalibrate model output probabilities. It is the first to explicitly leverage temperature scaling to improve both the calibration and the discriminativeness of semantic uncertainty, showing that even a single scalar temperature encodes an effective inductive bias. Experiments across multiple question-answering benchmarks show that the proposed method consistently outperforms heuristic baselines and more expressive token-level recalibration approaches, yielding significant gains in calibration, discrimination, and the quality of downstream entropy-based uncertainty estimates.
📝 Abstract
Calibration is central to reliable semantic uncertainty quantification, yet prior work has largely focused on discrimination, neglecting calibration. As calibration and discrimination capture distinct aspects of uncertainty, focusing on discrimination alone yields an incomplete picture. We address this gap by systematically evaluating both aspects across a broad set of confidence measures. We show that current approaches, particularly fixed-temperature heuristics, produce systematically miscalibrated and poorly discriminative semantic confidence distributions. We demonstrate that optimising a single scalar temperature, which, we argue, provides a suitable inductive bias, is a surprisingly simple yet effective solution. Our exhaustive evaluation confirms that temperature scaling consistently improves semantic calibration, discrimination, and downstream entropy, outperforming both heuristic baselines and more expressive token-level recalibration methods on question-answering tasks.
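The core recipe described in the abstract, fitting a single scalar temperature on held-out data and then using the rescaled token probabilities for downstream entropy-based uncertainty, can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's released code: the grid-search fitting procedure, the toy data, and all function names (`fit_temperature`, `predictive_entropy`) are assumptions for the sake of the example.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def nll(logits, labels, T):
    """Average negative log-likelihood of the gold labels at temperature T."""
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.1, 5.0, 200)):
    """Pick the single scalar T that minimizes held-out NLL (simple grid search;
    the paper's actual optimizer may differ)."""
    return min(grid, key=lambda T: nll(logits, labels, T))

def predictive_entropy(logits, T):
    """Entropy of the recalibrated token distribution, one downstream
    entropy-based uncertainty score."""
    p = softmax(logits / T)
    return -np.sum(p * np.log(p + 1e-12), axis=-1)

# Toy demonstration: sharpen well-calibrated logits by 3x to simulate an
# overconfident model; fitting should recover a temperature near 3.
rng = np.random.default_rng(0)
true_logits = rng.normal(size=(2000, 10))
labels = np.array([rng.choice(10, p=softmax(z)) for z in true_logits])
overconfident = 3.0 * true_logits
T_hat = fit_temperature(overconfident, labels)
print(T_hat)  # typically close to 3 on this synthetic data
```

The point of the single scalar, as the abstract argues, is inductive bias: with one parameter fit on held-out data there is little room to overfit, which is why it can beat more expressive per-token recalibrators.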
Problem

Research questions and friction points this paper is trying to address.

semantic uncertainty
calibration
discrimination
language models
question-answering
Innovation

Methods, ideas, or system contributions that make the work stand out.

temperature scaling
semantic uncertainty
calibration
discrimination
language models