🤖 AI Summary
Large language models (LLMs) frequently generate factually incorrect outputs—so-called “hallucinations”—severely limiting their trustworthy deployment in high-stakes domains such as healthcare and law. To address this, we propose Delta, a purely inference-time intervention that requires no parameter updates or additional training data. Delta employs contrastive decoding: it randomly masks tokens in the input prompt, computes the divergence between the output probability distributions of the original and masked prompts, and suppresses spurious token generation based on this distributional discrepancy. Crucially, Delta is the first method to enable fully training-free, distribution-aware inference-time correction. Experiments demonstrate substantial improvements: on SQuAD v2, the “no-answer” exact match (EM) increases by over 10 percentage points; overall EM rises by approximately 3, 6, 7, and 2 points on SQuAD v1.1, SQuAD v2, TriviaQA, and Natural Questions, respectively—effectively mitigating hallucinations induced by contextual ambiguity.
📝 Abstract
Large language models (LLMs) demonstrate strong capabilities in natural language processing but remain prone to hallucinations, generating factually incorrect or fabricated content. This issue undermines their reliability, particularly in high-stakes domains such as healthcare and legal advice. To address this challenge, we propose Delta, an inference-time method that reduces hallucinations without requiring model retraining or additional data. Delta works by randomly masking parts of the input prompt and contrasting the output distributions for the original and masked inputs, effectively suppressing hallucinations through inference-only computations. We evaluate Delta on context-rich question-answering benchmarks, achieving absolute improvements of approximately 3 and 6 percentage points on SQuAD v1.1 and v2, respectively, and 7 and 2 percentage points on TriviaQA and Natural Questions under sampling-based decoding. Delta also improves the no-answer exact match score on SQuAD v2 by over ten percentage points, demonstrating its effectiveness in mitigating hallucinations arising from contextual ambiguity. These results highlight Delta as a computationally efficient and scalable approach for improving the reliability of LLMs in real-world applications.
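The mask-and-contrast idea described above can be sketched in a few lines. The sketch below is illustrative only: the masking ratio (`mask_prob`), the contrastive weight (`alpha`), and the exact combination formula are assumptions for demonstration, not the paper's verified hyperparameters; a real implementation would operate on an LLM's next-token logits rather than toy arrays.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_tokens(prompt_ids, mask_id, mask_prob=0.15):
    """Randomly replace prompt tokens with a mask token id.
    mask_prob is an assumed hyperparameter, not from the paper."""
    ids = np.array(prompt_ids)
    ids[rng.random(ids.shape) < mask_prob] = mask_id
    return ids.tolist()

def delta_scores(logits_full, logits_masked, alpha=0.5):
    """Contrastive combination: boost tokens the full context supports and
    penalize tokens the model would predict even without that context.
    The (1+alpha)/-alpha form is one common contrastive-decoding variant,
    used here as an illustrative assumption."""
    return (1 + alpha) * np.asarray(logits_full) - alpha * np.asarray(logits_masked)

def next_token(logits_full, logits_masked, alpha=0.5):
    """Greedy pick over the contrastively adjusted scores."""
    return int(np.argmax(delta_scores(logits_full, logits_masked, alpha)))

# Toy example: token 0 looks best under the full prompt alone, but it is
# also strongly predicted from the masked prompt (i.e., not grounded in
# the context), so the contrast shifts the choice to token 1.
print(next_token([2.0, 1.9], [2.5, 0.0]))  # -> 1
```

The toy example mirrors the intended effect: a token whose probability survives prompt masking is treated as context-independent (a hallucination candidate) and is down-weighted relative to tokens that depend on the full context.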