Robust Hallucination Detection in LLMs via Adaptive Token Selection

📅 2025-04-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing hallucination detection methods for large language models (LLMs) rely on fixed internal representations, which makes them brittle on free-form generations with varying lengths and sparsely distributed hallucinations, leading to significant performance fluctuations. To address this, we propose HaMI, a robust hallucination detection framework based on adaptive key-token selection. HaMI formulates hallucination detection as a sequence-level multiple-instance learning (MIL) task, jointly optimizing token-level representation selection and hallucination classification in an end-to-end differentiable manner. Crucially, HaMI introduces the first dynamic token selection mechanism, eliminating dependence on predefined positions or static token sets and enabling deeper exploitation of LLMs' internal semantic representations. On four mainstream hallucination benchmarks, HaMI outperforms state-of-the-art methods, with clear gains in both detection accuracy and cross-domain generalization.

📝 Abstract
Hallucinations in large language models (LLMs) pose significant safety concerns that impede their broader deployment. Recent research in hallucination detection has demonstrated that LLMs' internal representations contain truthfulness hints, which can be harnessed for detector training. However, the performance of these detectors is heavily dependent on the internal representations of predetermined tokens, fluctuating considerably when working on free-form generations with varying lengths and sparse distributions of hallucinated entities. To address this, we propose HaMI, a novel approach that enables robust detection of hallucinations through adaptive selection and learning of critical tokens that are most indicative of hallucinations. We achieve this robustness by an innovative formulation of the Hallucination detection task as Multiple Instance (HaMI) learning over token-level representations within a sequence, thereby facilitating a joint optimisation of token selection and hallucination detection on generation sequences of diverse forms. Comprehensive experimental results on four hallucination benchmarks show that HaMI significantly outperforms existing state-of-the-art approaches.
Problem

Research questions and friction points this paper is trying to address.

Detect hallucinations in large language models robustly
Adaptively select critical tokens for hallucination detection
Improve detection performance on diverse generation sequences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive token selection for hallucination detection
Multiple Instance learning over token-level representations
Joint optimization of token selection and detection
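The three ideas above can be sketched as a minimal multiple-instance scorer: every token representation in a sequence is scored, the top-k most indicative tokens are adaptively selected, and their pooled score serves as the sequence-level hallucination score. This is a hypothetical NumPy simplification under stated assumptions (linear token scorer, mean-over-top-k pooling, invented parameter names), not the paper's actual implementation:

```python
import numpy as np

def topk_mil_score(token_reps, w, b=0.0, k=3):
    """Sequence-level hallucination score via top-k multiple-instance pooling.

    token_reps: (T, d) array of per-token hidden states from an LLM (assumed input).
    w, b: parameters of a simple linear token scorer (illustrative choice).
    k: number of highest-scoring tokens to keep (the "adaptive selection").
    """
    # Score every token; selection keeps the k tokens most indicative of hallucination.
    token_scores = token_reps @ w + b            # shape (T,)
    k = min(k, len(token_scores))                # cap k at the sequence length
    top_scores = np.sort(token_scores)[-k:]      # the k largest token scores
    # Pool only the selected tokens into one sequence-level score
    # (mean pooling is one common MIL aggregation choice).
    return float(np.mean(top_scores))

# Toy usage: a 6-token sequence with 4-dimensional representations.
rng = np.random.default_rng(0)
reps = rng.normal(size=(6, 4))
w = rng.normal(size=4)
score = topk_mil_score(reps, w, k=2)
```

In a trainable version, the scorer parameters and the selection would be optimized jointly against sequence-level hallucination labels, which is what the MIL framing enables: supervision lives at the sequence level while the gradient flows through the selected tokens.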