Unsupervised Hallucination Detection by Inspecting Reasoning Processes

📅 2025-09-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Unsupervised hallucination detection suffers from weak correlation between proxy signals and factual correctness, as well as poor generalization. To address this, we propose IRIS, the first framework that leverages intrinsic LLM representations (context-aware embeddings and response uncertainty) that are fundamentally tied to truthfulness, enabling end-to-end unsupervised detection. IRIS guides the LLM to self-verify the veracity of a statement, generating soft pseudo-labels and jointly optimizing both embedding and uncertainty modeling, without human annotations or external knowledge sources. On multiple benchmarks, IRIS substantially outperforms existing unsupervised methods. Notably, it maintains high performance even when fine-tuned on as few as 100 unlabeled samples, demonstrating low computational overhead, strong cross-domain generalization, and practical scalability.

📝 Abstract
Unsupervised hallucination detection aims to identify hallucinated content generated by large language models (LLMs) without relying on labeled data. While unsupervised methods have gained popularity by eliminating labor-intensive human annotations, they frequently rely on proxy signals unrelated to factual correctness. This misalignment biases detection probes toward superficial or non-truth-related aspects, limiting generalizability across datasets and scenarios. To overcome these limitations, we propose IRIS, an unsupervised hallucination detection framework that leverages internal representations intrinsically tied to factual correctness. IRIS prompts the LLM to carefully verify the truthfulness of a given statement and obtains its contextualized embedding as informative features for training. Meanwhile, the uncertainty of each response is treated as a soft pseudo-label for truthfulness. Experimental results demonstrate that IRIS consistently outperforms existing unsupervised methods. Our approach is fully unsupervised, computationally low-cost, and works well even with little training data, making it suitable for real-time detection.
Problem

Research questions and friction points this paper is trying to address.

Detects hallucinations in LLM outputs without labeled data
Overcomes reliance on non-truth-related proxy signals
Uses internal representations and uncertainty for factual correctness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging internal representations for factual correctness
Using contextualized embeddings as informative training features
Considering response uncertainty as soft pseudolabel
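The pipeline sketched by these contributions (verification embedding in, uncertainty-derived soft pseudo-label as target, small probe on top) can be illustrated with a toy stand-in. Everything below is a hedged sketch under stated assumptions, not the paper's actual implementation: the "embeddings" and token probabilities are simulated, and the probe is a hand-rolled logistic regression trained on soft labels.

```python
# Illustrative sketch of an IRIS-style loop (assumed names and shapes, not
# the paper's code): (1) a verification prompt would yield a context-aware
# embedding; (2) response uncertainty supplies a soft pseudo-label in (0, 1);
# (3) a small probe is fit on (embedding, pseudo-label) pairs -- no human labels.
import math
import random

random.seed(0)

def soft_pseudo_label(token_probs):
    """Mean token probability as a soft truthfulness label (an assumption;
    any monotone uncertainty-to-label mapping would play the same role)."""
    return sum(token_probs) / len(token_probs)

def train_probe(embeddings, soft_labels, lr=0.5, epochs=200):
    """Logistic-regression probe trained with cross-entropy on soft labels."""
    dim = len(embeddings[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(embeddings, soft_labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of cross-entropy w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy stand-ins for LLM outputs: 2-d "embeddings" whose first coordinate
# loosely tracks truthfulness, plus noisy per-token probabilities.
embeddings, labels = [], []
for i in range(100):
    truthful = i % 2 == 0
    x = [(1.0 if truthful else -1.0) + random.gauss(0, 0.3), random.gauss(0, 1)]
    probs = [min(1.0, max(0.0, (0.9 if truthful else 0.1) + random.gauss(0, 0.05)))
             for _ in range(5)]
    embeddings.append(x)
    labels.append(soft_pseudo_label(probs))

w, b = train_probe(embeddings, labels)
score_true = predict(w, b, [1.0, 0.0])    # probe score for a "truthful" embedding
score_false = predict(w, b, [-1.0, 0.0])  # probe score for a "hallucinated" one
```

In a real setting, the embeddings would come from the LLM's hidden states over the verification response and the token probabilities from its output distribution; the point of the sketch is only that soft pseudo-labels derived from uncertainty suffice to train the probe without annotations.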
Ponhvoan Srey
Nanyang Technological University
Xiaobao Wu
Research Scientist, Nanyang Technological University
Large Language Models · Machine Learning · Natural Language Processing
Anh Tuan Luu
Nanyang Technological University