CLATTER: Comprehensive Entailment Reasoning for Hallucination Detection

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current hallucination detection methods for large language model (LLM) outputs lack precision. Method: The paper formalizes hallucination detection as fine-grained natural language inference (NLI) and proposes a systematic three-step structured reasoning framework: (1) claim decomposition, (2) sub-claim evidence attribution and entailment classification, and (3) aggregation into a final decision. The framework combines LLM chain-of-thought reasoning, evidence alignment, and interpretable metrics that assess the quality of intermediate reasoning steps, enabling transparency and performance attribution. Contribution/Results: Guided reasoning yields improved hallucination detection accuracy, and the quality of the intermediate reasoning steps correlates with final detection performance, supporting structured, auditable inference as a basis for trustworthy generative evaluation.

📝 Abstract
A common approach to hallucination detection casts it as a natural language inference (NLI) task, often using LLMs to classify whether the generated text is entailed by corresponding reference texts. Since entailment classification is a complex reasoning task, one would expect that LLMs could benefit from generating an explicit reasoning process, as in CoT reasoning or the explicit "thinking" of recent reasoning models. In this work, we propose that guiding such models to perform a systematic and comprehensive reasoning process -- one that both decomposes the text into smaller facts and also finds evidence in the source for each fact -- allows models to make much finer-grained and more accurate entailment decisions, leading to increased performance. To that end, we define a 3-step reasoning process, consisting of (i) claim decomposition, (ii) sub-claim attribution and entailment classification, and (iii) aggregated classification, showing that such guided reasoning indeed yields improved hallucination detection. Following this reasoning framework, we introduce an analysis scheme, consisting of several metrics that measure the quality of the intermediate reasoning steps, which provides additional empirical evidence for the improved quality of our guided reasoning scheme.
Problem

Research questions and friction points this paper is trying to address.

Detect hallucinations via comprehensive entailment reasoning
Improve NLI accuracy with guided multi-step reasoning
Enhance fact decomposition and evidence attribution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decompose text into smaller facts
Find evidence for each fact
Aggregate sub-claim decisions into a final classification
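The three steps above can be sketched as a simple pipeline. This is a minimal toy illustration, not the paper's implementation: the function names are hypothetical, and the string-matching stand-ins for decomposition and entailment would be replaced by LLM calls in CLATTER itself.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class SubClaimResult:
    claim: str
    evidence: Optional[str]
    entailed: bool


def decompose(text: str) -> List[str]:
    # Step (i): break the generated text into atomic sub-claims.
    # Toy stand-in: sentence splitting; the paper prompts an LLM.
    return [s.strip() for s in text.split(".") if s.strip()]


def attribute_and_classify(claim: str, source: str) -> SubClaimResult:
    # Step (ii): trace each sub-claim to supporting evidence in the
    # source and classify entailment.
    # Toy stand-in: lexical containment; the paper uses LLM reasoning.
    for sentence in source.split("."):
        if claim.lower() in sentence.lower():
            return SubClaimResult(claim, sentence.strip(), True)
    return SubClaimResult(claim, None, False)


def aggregate(results: List[SubClaimResult]) -> bool:
    # Step (iii): the output is entailed (hallucination-free) only if
    # every sub-claim is supported by the source.
    return all(r.entailed for r in results)


source = "The model was trained on 1B tokens. It uses attention."
generated = "It uses attention. It was trained on 2B tokens."
results = [attribute_and_classify(c, source) for c in decompose(generated)]
print(aggregate(results))  # False: the token count is unsupported
```

The aggregation step here is a strict conjunction; the paper's multi-level aggregation can be more nuanced, but the structure (decompose, attribute and classify, aggregate) is the same.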