🤖 AI Summary
To address factual hallucinations in large language models (LLMs) on knowledge-intensive tasks, this paper proposes REFIND, a retrieval-augmented hallucination span detection framework. REFIND leverages externally retrieved documents to directly assess the factual validity of generated text and introduces the Context Sensitivity Ratio (CSR), a novel metric quantifying the degree to which LLM outputs depend on retrieved evidence, enabling fine-grained hallucination localization. The authors construct a cross-lingual hallucination annotation dataset covering nine languages, including several low-resource ones, and evaluate hallucination span identification using Intersection-over-Union (IoU). Experimental results demonstrate that REFIND achieves significantly higher IoU than existing baselines, confirming substantial improvements in robustness, generalizability, and multilingual adaptability.
📝 Abstract
Hallucinations in large language model (LLM) outputs severely limit their reliability in knowledge-intensive tasks such as question answering. To address this challenge, we introduce REFIND (Retrieval-augmented Factuality hallucINation Detection), a novel framework that detects hallucinated spans within LLM outputs by directly leveraging retrieved documents. As part of REFIND, we propose the Context Sensitivity Ratio (CSR), a novel metric that quantifies the sensitivity of LLM outputs to retrieved evidence. This approach enables REFIND to detect hallucinations efficiently and accurately, setting it apart from existing methods. In our evaluation, REFIND demonstrated robustness across nine languages, including low-resource settings, and significantly outperformed baseline models, achieving superior IoU scores in identifying hallucinated spans. This work highlights the effectiveness of quantifying context sensitivity for hallucination detection, thereby paving the way for more reliable and trustworthy LLM applications across diverse languages.
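The abstract does not spell out how the Context Sensitivity Ratio is computed, but one plausible reading is a per-token comparison of the model's probability for each generated token with and without the retrieved documents in the prompt: tokens whose probability barely improves when evidence is added are weakly grounded and become hallucination candidates. The sketch below illustrates that idea with toy probabilities; the exact CSR formula, the `threshold` value, and the probability sources are assumptions, not the paper's definition.

```python
# Hedged sketch of a context-sensitivity score for hallucination span detection.
# ASSUMPTION: CSR here is p(token | context) / (p(token | context) + p(token | no context));
# the real REFIND formula may differ. Threshold and probabilities are illustrative.

def context_sensitivity_ratio(p_with_context: float, p_without_context: float) -> float:
    """Score in (0, 1): high when retrieved evidence boosts the token's probability."""
    eps = 1e-10  # guard against division by zero
    return p_with_context / (p_with_context + p_without_context + eps)

def flag_hallucinated_tokens(tokens, p_ctx, p_no_ctx, threshold=0.5):
    """Flag tokens whose score falls below the threshold as hallucination candidates."""
    results = []
    for tok, pc, pn in zip(tokens, p_ctx, p_no_ctx):
        score = context_sensitivity_ratio(pc, pn)
        results.append((tok, score, score < threshold))
    return results

# Toy example: the final token ("Mars") gains nothing from the retrieved evidence,
# so its score stays low and it is flagged.
tokens   = ["Paris", "is", "the", "capital", "of", "Mars"]
p_ctx    = [0.90, 0.95, 0.95, 0.90, 0.95, 0.05]  # p(token | question + retrieved docs + prefix)
p_no_ctx = [0.30, 0.90, 0.90, 0.40, 0.90, 0.30]  # p(token | question + prefix)

for tok, score, flagged in flag_hallucinated_tokens(tokens, p_ctx, p_no_ctx):
    print(f"{tok:8s} score={score:.2f} hallucinated={flagged}")
```

Consecutive flagged tokens would then be merged into spans, which is what the IoU evaluation mentioned in the abstract scores against annotated hallucination spans.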