🤖 AI Summary
Large language models (LLMs) frequently hallucinate, making reliable uncertainty quantification (UQ) essential in safety-critical applications; yet existing UQ methods, which rely primarily on semantic probabilities or pairwise distances, neglect intrinsic semantic structural information.
Method: We propose SeSE, a structure-aware UQ framework for LLMs. SeSE constructs an adaptively sparsified directed semantic graph over the model's semantic space and applies a hierarchical abstraction mechanism to define Semantic Structural Entropy, a theoretically grounded metric that captures latent semantic uncertainty.
Contribution/Results: SeSE enables fine-grained, claim-level UQ in long-form generation without requiring strong supervision. Evaluated across 29 model-dataset combinations, it significantly outperforms state-of-the-art baselines, including strong supervised methods and KLE, in hallucination detection, yielding more accurate and robust uncertainty estimates.
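To make the graph-construction step concrete, here is a minimal illustrative sketch (not the authors' code): sampled responses become nodes, hypothetical pairwise directional entailment scores become edge weights, and a simple mean-threshold rule stands in for the paper's adaptive sparsification. All scores and the thresholding rule below are assumptions for illustration only.

```python
import numpy as np

def build_semantic_graph(scores: np.ndarray) -> np.ndarray:
    """Sparsify a dense directed similarity matrix: keep edge i->j only if
    its weight exceeds the mean off-diagonal weight. This is a simple
    stand-in for SeSE's adaptive sparsification, which prunes connections
    that would introduce negative interference."""
    W = scores.astype(float).copy()
    np.fill_diagonal(W, 0.0)                      # no self-loops
    thresh = W[~np.eye(len(W), dtype=bool)].mean()
    return np.where(W > thresh, W, 0.0)

# Hypothetical entailment scores among 4 sampled responses; the matrix is
# asymmetric because entailment is directional (i entails j != j entails i).
scores = np.array([
    [1.0, 0.90, 0.80, 0.10],
    [0.85, 1.0, 0.75, 0.20],
    [0.80, 0.70, 1.0, 0.15],
    [0.10, 0.20, 0.10, 1.0],
])
A = build_semantic_graph(scores)
# Responses 0-2 stay densely connected; response 3 (a semantic outlier,
# e.g. a contradictory sample) loses all of its edges.
```

In this toy example the outlier response ends up isolated, so any structure-level uncertainty measure computed on the graph will register the disagreement among samples.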
📝 Abstract
Reliable uncertainty quantification (UQ) is essential for deploying large language models (LLMs) in safety-critical scenarios, as it enables them to abstain from responding when uncertain, thereby avoiding hallucinated falsehoods. However, state-of-the-art UQ methods primarily rely on semantic probability distributions or pairwise distances, overlooking latent semantic structural information that could enable more precise uncertainty estimates. This paper presents Semantic Structural Entropy (SeSE), a principled UQ framework that quantifies the inherent semantic uncertainty of LLMs from a structural information perspective for hallucination detection. Specifically, to effectively model semantic spaces, we first develop an adaptively sparsified directed semantic graph construction algorithm that captures directional semantic dependencies while automatically pruning unnecessary connections that introduce negative interference. We then exploit latent semantic structural information through hierarchical abstraction: SeSE is defined as the structural entropy of the optimal semantic encoding tree, formalizing intrinsic uncertainty within semantic spaces after optimal compression. A higher SeSE value corresponds to greater uncertainty, indicating that LLMs are highly likely to generate hallucinations. In addition, to enhance fine-grained UQ in long-form generation -- where existing methods often rely on heuristic sample-and-count techniques -- we extend SeSE to quantify the uncertainty of individual claims by modeling their random semantic interactions, providing theoretically interpretable hallucination detection. Extensive experiments across 29 model-dataset combinations show that SeSE significantly outperforms advanced UQ baselines, including strong supervised methods and the recently proposed KLE.
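The core quantity behind SeSE, structural entropy of an encoding tree, can be illustrated with a minimal sketch. The code below assumes an undirected weighted graph and a fixed two-level partition (communities under a root); the paper's formulation operates on a directed semantic graph and an optimized encoding tree, so this is only the standard two-level structural-entropy formula, not the authors' algorithm.

```python
import numpy as np

def structural_entropy_2level(A: np.ndarray, communities) -> float:
    """Two-level structural entropy of an undirected weighted graph A under a
    given partition. Each community j contributes
      -(g_j / vol) * log2(V_j / vol)            (cut term)
      -sum_v (d_v / vol) * log2(d_v / V_j)      (leaf terms)
    where d_v is node degree, V_j the community's degree volume, g_j the
    weight of edges crossing the community boundary, vol the total degree."""
    deg = A.sum(axis=1)
    vol = deg.sum()
    n = A.shape[0]
    H = 0.0
    for C in communities:
        C = list(C)
        rest = [i for i in range(n) if i not in C]
        V_j = deg[C].sum()
        g_j = A[np.ix_(C, rest)].sum() if rest else 0.0
        if g_j > 0:
            H -= (g_j / vol) * np.log2(V_j / vol)
        for v in C:
            if deg[v] > 0:
                H -= (deg[v] / vol) * np.log2(deg[v] / V_j)
    return H

# Two disconnected triangles: the natural partition gives lower entropy
# than a partition that mixes the two clusters.
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
A = np.zeros((6, 6))
A[:3, :3] = tri
A[3:, 3:] = tri
H_good = structural_entropy_2level(A, [[0, 1, 2], [3, 4, 5]])  # = log2(3)
H_bad = structural_entropy_2level(A, [[0, 3], [1, 2, 4, 5]])
```

The optimal encoding tree is the one that minimizes this entropy; in SeSE, the minimized value over the semantic graph serves as the uncertainty score, with higher values signaling a less compressible (more scattered) semantic space and hence a higher hallucination risk.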