🤖 AI Summary
Large language models (LLMs) frequently generate factually incorrect statements, termed “hallucinations,” with unwarranted confidence, undermining the reliability of their outputs. We find that the “verbal uncertainty” a model expresses in its outputs is encoded in representation space as a single, localizable linear feature, and that this feature is only moderately aligned with the model's true “semantic uncertainty”; critically, the degree of this misalignment predicts hallucination better than semantic uncertainty alone. Building on this finding, we propose an inference-time uncertainty calibration paradigm: dynamic, feature-level interventions (including logit correction and uncertainty reweighting) that realign verbal and semantic uncertainty. Evaluated on short-answer generation tasks, our method reduces relative hallucination rates by 32% on average, improving output veracity and user trust.
📝 Abstract
LLMs often adopt an assertive language style even when making false claims. Such “overconfident hallucinations” mislead users and erode trust. The ability to express in language the actual degree of uncertainty around a claim is therefore of great importance. We find that “verbal uncertainty” is governed by a single linear feature in the representation space of LLMs, and show that it has only moderate correlation with the model's actual “semantic uncertainty.” We apply this insight to show that (1) the mismatch between semantic and verbal uncertainty is a better predictor of hallucinations than semantic uncertainty alone, and (2) we can intervene on verbal uncertainty at inference time and reduce hallucinations on short-form answers, achieving an average relative reduction of 32%.
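The abstract does not spell out the intervention mechanism; a minimal sketch of one plausible form is steering a hidden state along a learned linear feature direction so that its verbal-uncertainty readout matches a target (e.g. the model's semantic uncertainty). All names and the steering rule below are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def steer_verbal_uncertainty(h: np.ndarray, v: np.ndarray, target: float) -> np.ndarray:
    """Shift hidden state h so its projection onto feature direction v equals `target`.

    h:      a hidden-state vector (e.g. one residual-stream activation) -- hypothetical
    v:      the verbal-uncertainty feature direction (in practice it would be
            estimated, e.g. with a linear probe); need not be normalized
    target: desired verbal-uncertainty level, e.g. the semantic uncertainty
    """
    v_hat = v / np.linalg.norm(v)
    current = float(h @ v_hat)             # current verbal-uncertainty readout
    return h + (target - current) * v_hat  # move only along the feature direction

# Toy example: push the verbal-uncertainty component of a 3-d state to 0.8
h = np.array([1.0, 2.0, 3.0])
v = np.array([0.0, 1.0, 0.0])
h_new = steer_verbal_uncertainty(h, v, 0.8)  # -> array([1. , 0.8, 3. ])
```

Because the edit moves the state only along `v`, all components orthogonal to the feature direction are left untouched, which is what makes a single-direction intervention minimally invasive.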