🤖 AI Summary
To address persistent contextual hallucinations in retrieval-augmented generation (RAG), which occur even with capable large language models (LLMs), this paper proposes AggTruth, an online hallucination detection method grounded in the model's internal attention. AggTruth aggregates self-attention score distributions over the provided passage (four variants, each differing in aggregation technique) and systematically studies how the aggregation strategy and the choice of attention heads affect detection performance, further sharpening discriminative power via feature selection. Experiments across several LLMs, including Llama-2/3, Qwen, and Phi-3, show that AggTruth is stable in both same-task and cross-task setups, outperforming the current SOTA in multiple scenarios. Crucially, detection runs fully online, relying on attention scores the model already computes, with minimal computational overhead.
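The core idea above (collapsing each attention head's score distribution over the passage into one scalar feature) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the tensor layout, the `sum` and `entropy` aggregations, and the function name are assumptions for the sketch.

```python
import numpy as np

def aggregate_heads(attn, ctx_len, method="sum"):
    """Collapse one layer's attention into a per-head scalar feature.

    attn:    (num_heads, query_len, key_len) attention weights for the
             tokens generated so far (each row sums to 1).
    ctx_len: number of leading key positions belonging to the passage.
    method:  illustrative aggregation; "sum" keeps the total attention
             mass on the passage, "entropy" the spread of that mass.
    """
    ctx = attn[:, :, :ctx_len]  # attention paid to the passage tokens
    if method == "sum":
        # total passage attention per query, averaged over queries
        return ctx.sum(axis=-1).mean(axis=-1)
    if method == "entropy":
        # entropy of the renormalised passage-attention distribution
        p = ctx / np.clip(ctx.sum(axis=-1, keepdims=True), 1e-12, None)
        h = -(p * np.log(np.clip(p, 1e-12, None))).sum(axis=-1)
        return h.mean(axis=-1)
    raise ValueError(f"unknown method: {method}")
```

A detector would compute such features for every head at each generation step and feed them to a lightweight classifier, so no fine-tuning of the LLM itself is needed.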
📝 Abstract
In real-world applications, Large Language Models (LLMs) often hallucinate, even in Retrieval-Augmented Generation (RAG) settings, which poses a significant challenge to their deployment. In this paper, we introduce AggTruth, a method for online detection of contextual hallucinations by analyzing the distribution of internal attention scores in the provided context (passage). Specifically, we propose four different variants of the method, each varying in the aggregation technique used to calculate attention scores. Across all LLMs examined, AggTruth demonstrated stable performance in both same-task and cross-task setups, outperforming the current SOTA in multiple scenarios. Furthermore, we conducted an in-depth analysis of feature selection techniques and examined how the number of selected attention heads impacts detection performance, demonstrating that careful selection of heads is essential to achieve optimal results.
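The abstract's point that careful head selection is essential can be illustrated with a simple ranking step: score each head's aggregated feature against hallucination labels and keep only the top-k heads. The criterion below (absolute Pearson correlation) and the function name are illustrative assumptions, not necessarily the selection technique analyzed in the paper.

```python
import numpy as np

def select_heads(features, labels, k):
    """Rank attention-head features by a univariate criterion, keep top-k.

    features: (n_samples, n_heads) aggregated scores, one column per head
    labels:   (n_samples,) 1 = hallucinated output, 0 = grounded output
    Returns the indices of the k highest-scoring heads.
    """
    y = labels - labels.mean()
    x = features - features.mean(axis=0)
    # absolute Pearson correlation of each head's feature with the label
    corr = np.abs(
        (x * y[:, None]).sum(axis=0)
        / (np.linalg.norm(x, axis=0) * np.linalg.norm(y) + 1e-12)
    )
    return np.argsort(corr)[::-1][:k]
```

Only the selected heads' features are then passed to the downstream detector; using all heads indiscriminately would dilute the signal, which matches the abstract's finding that the number of selected heads matters.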