🤖 AI Summary
To address the challenge of closed-domain hallucinations and poor traceability in processes with multiple generative steps (MGS), this paper proposes VeriTrail, the first end-to-end framework capable of both detecting and tracing hallucinations. Methodologically, it integrates intermediate-output consistency modeling, source dependency graph reasoning, and hierarchical faithfulness scoring to localize the step at which a hallucination is introduced and to trace the provenance path of faithful content back to the source. Contributions include: (1) the first traceable detection method for closed-domain hallucinations in MGS processes; (2) the first benchmark datasets to include complete intermediate outputs and human-annotated faithfulness labels for MGS processes; and (3) performance that surpasses baseline methods on both datasets, in both hallucination identification accuracy and step-level localization precision.
📝 Abstract
Even when instructed to adhere to source material, Language Models often generate unsubstantiated content, a phenomenon known as "closed-domain hallucination." This risk is amplified in processes with multiple generative steps (MGS), compared to processes with a single generative step (SGS). However, due to the greater complexity of MGS processes, we argue that detecting hallucinations in their final outputs is necessary but not sufficient: it is equally important to trace where hallucinated content was likely introduced and how faithful content may have been derived from the source through intermediate outputs. To address this need, we present VeriTrail, the first closed-domain hallucination detection method designed to provide traceability for both MGS and SGS processes. We also introduce the first datasets to include all intermediate outputs as well as human annotations of final outputs' faithfulness for their respective MGS processes. We demonstrate that VeriTrail outperforms baseline methods on both datasets.