Delta - Contrastive Decoding Mitigates Text Hallucinations in Large Language Models

📅 2025-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently generate factually incorrect outputs—so-called “hallucinations”—severely limiting their trustworthy deployment in high-stakes domains such as healthcare and law. To address this, the authors propose Delta, a purely inference-time intervention that requires no parameter updates or additional training data. Delta employs contrastive decoding: it randomly masks tokens in the input prompt, computes the divergence between the output probability distributions of the original and masked prompts, and suppresses spurious token generation based on this distributional discrepancy. Delta thus provides training-free, distribution-aware correction entirely at inference time. Experiments demonstrate substantial improvements: on SQuAD v2, the “no-answer” exact match (EM) increases by over 10 percentage points; overall EM rises by approximately 3, 6, 7, and 2 points on SQuAD v1.1, SQuAD v2, TriviaQA, and Natural Questions, respectively—effectively mitigating hallucinations induced by contextual ambiguity.
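The masking step described above can be sketched as follows. This is an illustrative toy, not the paper's implementation: the mask ratio, the mask token string, and the helper name are all assumptions for demonstration.

```python
import random

def mask_prompt(tokens, mask_ratio=0.4, mask_token="[MASK]", seed=None):
    """Randomly replace a fraction of prompt tokens with a mask token.
    The masked prompt is fed to the same model; its output distribution
    reflects what the model predicts without the full context, which
    Delta then contrasts against the original-prompt distribution."""
    rng = random.Random(seed)
    n_mask = max(1, int(len(tokens) * mask_ratio))
    positions = set(rng.sample(range(len(tokens)), n_mask))
    return [mask_token if i in positions else t for i, t in enumerate(tokens)]

prompt = "the capital of france is".split()
masked = mask_prompt(prompt, mask_ratio=0.4, seed=0)
```

In practice the masking would operate on tokenizer IDs rather than whitespace-split words; the word-level version above just makes the mechanism visible.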

📝 Abstract
Large language models (LLMs) demonstrate strong capabilities in natural language processing but remain prone to hallucinations, generating factually incorrect or fabricated content. This issue undermines their reliability, particularly in high-stakes domains such as healthcare and legal advisory. To address this challenge, we propose Delta, an inference-time method that reduces hallucinations without requiring model retraining or additional data. Delta works by randomly masking parts of the input prompt and contrasting the output distributions for the original and masked inputs, effectively suppressing hallucinations through inference-only computations. We evaluate Delta on context-rich question-answering benchmarks, achieving absolute improvements of approximately 3 and 6 percentage points on SQuAD v1.1 and v2, respectively, and 7 and 2 percentage points on TriviaQA and Natural Questions under sampling decoding. Delta also improves the no-answer exact match score on SQuAD v2 by over ten percentage points, demonstrating its effectiveness in mitigating hallucinations arising from contextual ambiguity. These results highlight Delta as a computationally efficient and scalable approach for improving the reliability of LLMs in real-world applications.
Problem

Research questions and friction points this paper is trying to address.

Mitigating hallucinations in large language models
Improving reliability in high-stakes domains
Improving factual accuracy without retraining models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Random masking of input prompts
Contrasting output distributions
Inference-only computations
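The contrastive-decoding idea behind these bullets can be sketched on toy next-token distributions. Everything here is an assumption for illustration (the contrast weight `alpha`, the scoring formula, and the example probabilities); the paper's exact formulation may differ.

```python
import numpy as np

def contrastive_scores(p_orig, p_masked, alpha=1.0, eps=1e-12):
    """Score next tokens by contrasting the original-prompt distribution
    against the masked-prompt distribution in log space. Tokens that the
    model still assigns high probability after the context is masked are
    likely generic or spurious continuations and get down-weighted;
    tokens that depend on the full context are promoted."""
    return np.log(p_orig + eps) - alpha * np.log(p_masked + eps)

# Toy vocabulary of 3 tokens: token 0 is grounded in the context,
# token 1 is a generic continuation the model emits even without it.
p_orig = np.array([0.45, 0.50, 0.05])    # distribution with full prompt
p_masked = np.array([0.10, 0.60, 0.30])  # distribution with masked prompt

greedy = int(np.argmax(p_orig))                      # picks the generic token 1
best = int(np.argmax(contrastive_scores(p_orig, p_masked)))  # contrast flips to token 0
```

The example shows the intended effect: plain greedy decoding would choose the context-free continuation, while the contrastive score recovers the context-grounded token.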
Cheng Peng Huang
Department of Computer Science, National Taiwan University of Science and Technology, Taipei, Taiwan
Hao-Yuan Chen
University of London, Mindify AI
Quantum Machine Learning · Quantum Utility · LLM Reasoning · LLM Agent