🤖 AI Summary
This work addresses hallucination detection in large language model (LLM) outputs by proposing the Belief Tree Propagation (BTProp) framework. BTProp recursively decomposes a statement into child statements using three decomposition strategies, producing a belief tree that makes the logical dependencies among statements explicit, and then applies a hidden Markov tree model to propagate the LLM's continuous belief scores through the tree in a principled way. Compared to baselines such as self-consistency, BTProp improves AUROC and AUC-PR by 3–9% across multiple hallucination detection benchmarks. Its core contributions are twofold: (1) formulating belief integration as inference in a tree-structured probabilistic graphical model, and (2) enabling fine-grained, interpretable localization of hallucinations guided by the logical dependencies in the tree.
📝 Abstract
This paper focuses on hallucination detection, the task of determining the truthfulness of LLM-generated statements. A popular class of methods for this problem exploits the LLM's self-consistency across its beliefs in a set of logically related augmented statements generated by the LLM itself; such methods require no external knowledge database and work with both white-box and black-box LLMs. However, in many existing approaches the augmented statements tend to be monotonous and unstructured, which makes it difficult to integrate meaningful information from the LLM's beliefs in them. Moreover, many methods operate on a binarized version of the LLM's belief rather than the continuous version, discarding substantial information. To overcome these limitations, we propose Belief Tree Propagation (BTProp), a probabilistic framework for LLM hallucination detection. BTProp builds a belief tree of logically related statements by recursively decomposing a parent statement into child statements with three decomposition strategies, and constructs a hidden Markov tree model to integrate the LLM's belief scores in these statements in a principled way. Experimental results show that the method improves over baselines by 3–9% (measured by AUROC and AUC-PR) on multiple hallucination detection benchmarks. Code is available at https://github.com/UCSB-NLP-Chang/BTProp.
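The hidden Markov tree idea can be made concrete with a small sketch: each statement is a node with a hidden binary truth state, the LLM's continuous belief score acts as a soft observation of that state, and an upward (leaf-to-root) sum-product pass yields the posterior truthfulness of the root claim. Everything below (the `Node` class, the `TRANSITION` matrix, and the example scores) is an illustrative assumption, not the paper's actual implementation; see the linked repository for that.

```python
# Minimal sketch: upward belief propagation in a hidden Markov tree over a
# belief tree of statements, assuming binary hidden truth states.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class Node:
    belief: float                                  # LLM's continuous belief that the statement is true
    children: list["Node"] = field(default_factory=list)


# P(child state | parent state), rows indexed by parent in {False, True}.
# Illustrative values: a true parent tends to have true children, and vice versa.
TRANSITION = np.array([[0.8, 0.2],
                       [0.1, 0.9]])


def upward_message(node: Node) -> np.ndarray:
    """Sum-product message from a subtree to its parent: for each parent
    state, the likelihood of all belief-score evidence in the subtree."""
    # Treat the LLM belief score as a soft observation of (False, True).
    evidence = np.array([1.0 - node.belief, node.belief])
    for child in node.children:
        evidence = evidence * upward_message(child)
    # Marginalize this node's hidden state out against the transition model.
    return TRANSITION @ evidence


def root_posterior(root: Node, prior: float = 0.5) -> float:
    """Posterior probability that the root statement is true, given the
    belief scores of every statement in the tree."""
    evidence = np.array([1.0 - root.belief, root.belief])
    for child in root.children:
        evidence = evidence * upward_message(child)
    joint = np.array([1.0 - prior, prior]) * evidence
    return joint[1] / joint.sum()


# Example: a root claim with two decomposed sub-statements the LLM believes strongly.
tree = Node(belief=0.6, children=[Node(belief=0.9), Node(belief=0.85)])
print(f"P(root is true | tree) = {root_posterior(tree):.3f}")
```

In this toy run, the two strongly believed child statements lift the root's posterior from its raw score of 0.6 to about 0.93, showing how belief in decomposed statements propagates upward to the original claim.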