🤖 AI Summary
Problem: Semantic entropy (SE), widely used to estimate uncertainty in long-form text generation by large language models (LLMs), quantifies uncertainty inaccurately because it neglects both intra-cluster compactness and inter-cluster separation.
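For context, here is a minimal sketch of the discrete semantic entropy baseline: sampled responses are grouped into semantic clusters and entropy is taken over cluster frequencies. The standard method clusters by bidirectional entailment; the exact-match stub below is a simplification for brevity, and all names are illustrative.

```python
import math
from collections import Counter

def semantic_entropy(responses):
    """Discrete semantic entropy over N sampled responses.

    Standard SE clusters responses by bidirectional entailment; exact
    string match below is a stand-in for brevity. Cluster probabilities
    are relative frequencies, and entropy is taken over clusters. Note
    that only cluster sizes matter: how tightly responses agree within
    a cluster, or how far apart clusters lie, never enters the score.
    """
    clusters = Counter(r.strip().lower() for r in responses)  # stub clustering
    total = sum(clusters.values())
    return -sum((c / total) * math.log(c / total) for c in clusters.values())

# Two answer sets with very different semantic spread get the same SE:
print(semantic_entropy(["Paris", "Paris", "Lyon"]))         # two clusters
print(semantic_entropy(["Paris", "Paris", "blue cheese"]))  # also two clusters, same SE
```

Because only cluster frequencies enter the score, answer sets with very different semantic spread can receive identical SE values, which is exactly the limitation described above.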
Method: We propose a pairwise semantic similarity-based uncertainty quantification method, the first to incorporate pairwise similarity modeling into uncertainty estimation. We theoretically prove that it strictly generalizes SE, and it supports both black-box (requiring only output embeddings) and white-box (leveraging token-level probabilities) deployment without fine-tuning. Built on a nearest-neighbor entropy estimation framework, it jointly models intra- and inter-cluster semantic relationships in the embedding space.
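The paper's exact estimator is not reproduced in this summary; the following is a minimal black-box sketch in the spirit described, a soft nearest-neighbor entropy over output embeddings in which pairwise cosine similarities replace hard cluster assignments. The kernel form and the temperature `tau` are illustrative assumptions.

```python
import numpy as np

def soft_nn_entropy(embeddings, tau=0.1):
    """Black-box uncertainty from pairwise similarities of output embeddings.

    embeddings: (N, d) array, one sentence embedding per sampled response.
    Each response's semantic neighborhood density is the mean kernel
    similarity to the other responses; low average density means the
    samples are semantically dispersed, i.e. high uncertainty. The
    kernel and temperature here are illustrative choices.
    """
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = X @ X.T                          # pairwise cosine similarities
    n = len(X)
    mask = ~np.eye(n, dtype=bool)           # exclude self-similarity
    density = (np.exp(sims / tau) * mask).sum(axis=1) / (n - 1)
    return -np.mean(np.log(density))        # nearest-neighbor-style entropy score

# Tightly agreeing samples score lower (less uncertain) than dispersed ones:
rng = np.random.default_rng(0)
tight = rng.normal(size=(1, 384)) + 0.01 * rng.normal(size=(8, 384))
spread = rng.normal(size=(8, 384))
print(soft_nn_entropy(tight) < soft_nn_entropy(spread))  # True
```

Unlike the hard-clustering baseline, the score varies continuously with how compact each semantic neighborhood is and how far apart the samples lie.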
Results: Evaluated with Phi-3 and Llama-3 on question answering, summarization, and machine translation, the method reduces expected calibration error (ECE) by 27.4% on average and significantly improves hallucination detection. The implementation is publicly available.
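For reference on the metric behind that number, here is a minimal sketch of binned expected calibration error; the equal-width binning and the mapping of uncertainty scores into [0, 1] confidences are assumptions, not the paper's exact evaluation protocol.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: bin predictions by confidence, then average the
    |accuracy - mean confidence| gap per bin, weighted by bin mass.
    Uncertainty scores must first be mapped into [0, 1] confidences
    (e.g., 1 minus a min-max normalized score)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if lo == 0.0:                        # include exact-zero confidences
            in_bin |= confidences == 0.0
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

print(expected_calibration_error([0.9, 0.8, 0.3, 0.95], [1, 1, 0, 0]))
```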
📝 Abstract
Hallucination in large language models (LLMs) can be detected by assessing the uncertainty of model outputs, typically measured using entropy. Semantic entropy (SE) enhances traditional entropy estimation by quantifying uncertainty at the semantic cluster level. However, as modern LLMs generate longer one-sentence responses, SE becomes less effective because it overlooks two crucial factors: intra-cluster similarity (the spread within a cluster) and inter-cluster similarity (the distance between clusters). To address these limitations, we propose a simple black-box uncertainty quantification method inspired by nearest-neighbor estimates of entropy. Our approach can also be easily extended to white-box settings by incorporating token probabilities. Additionally, we provide theoretical results showing that our method generalizes semantic entropy. Extensive empirical results demonstrate its effectiveness compared to semantic entropy across two recent LLMs (Phi-3 and Llama-3) and three common text generation tasks: question answering, text summarization, and machine translation. Our code is available at https://github.com/BigML-CS-UCLA/SNNE.
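The abstract notes that the black-box score extends to white-box settings via token probabilities. As a hedged illustration only, one plausible instantiation weights each sampled response by its length-normalized sequence likelihood when aggregating neighbor similarities, reusing the black-box sketch above; this weighting rule is an assumption, not the paper's stated formulation.

```python
import numpy as np

def white_box_soft_nn_entropy(embeddings, token_logprobs, tau=0.1):
    """White-box variant: weight neighbors by sequence likelihood.

    token_logprobs: one array of per-token log-probabilities per response.
    Each sequence gets a length-normalized log-likelihood, softmaxed into
    weights, so likelier generations count more when estimating each
    response's semantic neighborhood density. With uniform weights this
    reduces to the black-box sketch above. The weighting rule itself is
    an illustrative assumption, not the paper's stated formulation.
    """
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = X @ X.T
    n = len(X)
    seq_ll = np.array([np.mean(lp) for lp in token_logprobs])  # length-normalized
    w = np.exp(seq_ll - seq_ll.max())
    w /= w.sum()                            # softmax over responses
    mask = ~np.eye(n, dtype=bool)           # drop self-similarity
    density = (np.exp(sims / tau) * mask) @ w
    return -np.mean(np.log(density))
```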