SeSE: A Structural Information-Guided Uncertainty Quantification Framework for Hallucination Detection in LLMs

📅 2025-11-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) frequently hallucinate in safety-critical applications, and existing uncertainty quantification (UQ) methods, which rely primarily on semantic probability distributions or pairwise distances, neglect intrinsic semantic structural information. Method: SeSE is a structure-aware UQ framework that constructs an adaptively sparsified directed semantic graph over model generations and applies hierarchical abstraction to define Semantic Structural Entropy, a theoretically grounded metric of latent semantic uncertainty. Contribution/Results: SeSE enables fine-grained, claim-level UQ in long-form generation without requiring strong supervision. Evaluated across 29 model–dataset combinations, it significantly outperforms state-of-the-art baselines, including strong supervised methods and the recently proposed KLE, in hallucination detection.

📝 Abstract
Reliable uncertainty quantification (UQ) is essential for deploying large language models (LLMs) in safety-critical scenarios, as it enables them to abstain from responding when uncertain, thereby avoiding hallucinating falsehoods. However, state-of-the-art UQ methods primarily rely on semantic probability distributions or pairwise distances, overlooking latent semantic structural information that could enable more precise uncertainty estimates. This paper presents Semantic Structural Entropy (SeSE), a principled UQ framework that quantifies the inherent semantic uncertainty of LLMs from a structural information perspective for hallucination detection. Specifically, to effectively model semantic spaces, we first develop an adaptively sparsified directed semantic graph construction algorithm that captures directional semantic dependencies while automatically pruning unnecessary connections that introduce negative interference. We then exploit latent semantic structural information through hierarchical abstraction: SeSE is defined as the structural entropy of the optimal semantic encoding tree, formalizing intrinsic uncertainty within semantic spaces after optimal compression. A higher SeSE value corresponds to greater uncertainty, indicating that LLMs are highly likely to generate hallucinations. In addition, to enhance fine-grained UQ in long-form generation -- where existing methods often rely on heuristic sample-and-count techniques -- we extend SeSE to quantify the uncertainty of individual claims by modeling their random semantic interactions, providing theoretically explicable hallucination detection. Extensive experiments across 29 model-dataset combinations show that SeSE significantly outperforms advanced UQ baselines, including strong supervised methods and the recently proposed KLE.
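The structural entropy at the heart of SeSE builds on the classical structural information measure of a graph under a partitioning (encoding) tree. As a rough intuition only, here is a minimal sketch of the standard two-level case on an undirected weighted graph; the `structural_entropy` helper is hypothetical and does not reproduce the paper's directed graphs or optimal encoding-tree search. Lower entropy indicates that the semantic graph compresses well into coherent clusters, i.e., lower uncertainty.

```python
import math
from collections import defaultdict

def structural_entropy(edges, partition):
    """Two-level structural entropy of an undirected weighted graph
    under a fixed node partition (a sketch, not the SeSE algorithm).

    edges: list of (u, v, weight) tuples.
    partition: dict mapping each node to its community id.
    """
    deg = defaultdict(float)   # weighted degree of each node
    vol = defaultdict(float)   # volume (sum of degrees) per community
    cut = defaultdict(float)   # weight of edges leaving each community
    total = 0.0                # total volume 2m of the graph
    for u, v, w in edges:
        deg[u] += w
        deg[v] += w
        total += 2 * w
    for v, d in deg.items():
        vol[partition[v]] += d
    for u, v, w in edges:
        if partition[u] != partition[v]:
            cut[partition[u]] += w
            cut[partition[v]] += w
    h = 0.0
    # Leaf terms: uncertainty of locating a node inside its community.
    for v, d in deg.items():
        h -= (d / total) * math.log2(d / vol[partition[v]])
    # Community terms: uncertainty of a random walk crossing communities.
    for c in vol:
        if cut[c] > 0:
            h -= (cut[c] / total) * math.log2(vol[c] / total)
    return h
```

A partition that aligns with the graph's cluster structure yields lower entropy than one that splits clusters, which is the sense in which a well-compressed semantic space signals low uncertainty.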
Problem

Research questions and friction points this paper is trying to address.

Quantifies LLM uncertainty using semantic structural information for hallucination detection
Models semantic spaces with adaptive graph construction and hierarchical abstraction
Extends framework to evaluate individual claim uncertainty in long-form generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptively sparsified directed semantic graph construction
Structural entropy from optimal semantic encoding tree
Uncertainty quantification via random semantic interactions modeling
Xingtao Zhao
School of Cyber Science and Technology, Beihang University, Beijing 100191, China
Hao Peng
School of Cyber Science and Technology, Beihang University, Beijing 100191, China
Dingli Su
School of Computer Science and Engineering, Beihang University, Beijing 100191, China
Xianghua Zeng
Beihang University
Structural Information Principles, Reinforcement Learning
Chunyang Liu
Didi Chuxing
Data Mining, Marketplace, Autonomous Driving
Jinzhi Liao
Laboratory for Big Data and Decision, National University of Defense Technology, Changsha 410073, China
Philip S. Yu
Professor of Computer Science, University of Illinois at Chicago
Data Mining, Database, Privacy