🤖 AI Summary
This work presents the first systematic evaluation of retrieval-augmented language models' (RALMs) "rejection capability"—their ability to identify and abstain from answering unknown questions—addressing a critical gap in hallucination research, which has largely overlooked model calibration and rejection behavior. The authors find that RALMs suffer from pervasive over-rejection, and that in-context fine-tuning mitigates this issue whereas refusal-aware instruction tuning (R-tuning) exacerbates it. To address this, they propose a lightweight rejection mechanism that jointly leverages internal confidence scores and external retrieval signals, requiring no additional parameters. The method significantly improves rejection accuracy (+12.3%) while also enhancing final answer quality (EM +4.1%). This work provides both theoretical insights and practical solutions for building more reliable, well-calibrated RALMs.
📝 Abstract
Existing Large Language Models (LLMs) occasionally generate plausible yet factually incorrect responses, known as hallucinations. Researchers primarily use two approaches to mitigate hallucinations, namely Retrieval-Augmented Language Models (RALMs) and refusal post-training. However, current research predominantly emphasizes their individual effectiveness while overlooking the evaluation of the refusal capability of RALMs. In this study, we ask the fundamental question: Do RALMs know when they don't know? Specifically, we ask three questions. First, are RALMs well-calibrated regarding different internal and external knowledge states? We examine the influence of various factors, and, contrary to expectations, find that LLMs exhibit significant **over-refusal** behavior. Second, how does refusal post-training affect the over-refusal issue? We investigate the Refusal-aware Instruction Tuning and In-Context Fine-tuning methods. Our results show that the over-refusal problem is mitigated by in-context fine-tuning but magnified by R-tuning. However, we also find that the refusal ability may conflict with the quality of the answer. Finally, we develop a simple yet effective refusal method for refusal post-trained models to improve their overall answer quality in terms of refusal and correct answers. Our study provides a more comprehensive understanding of the influence of important factors on RALM systems.
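The abstract describes a refusal method that combines internal confidence with external retrieval signals. A minimal sketch of one such decision rule is shown below; the function names, thresholds, and the AND-combination rule are illustrative assumptions, not the paper's actual method:

```python
# Hypothetical sketch: refuse only when BOTH the model's internal
# confidence and the external retrieval-relevance signal are low.
# Thresholds and the combination rule are assumptions for illustration,
# not the paper's exact mechanism.

def should_refuse(answer_confidence: float,
                  retrieval_score: float,
                  conf_threshold: float = 0.5,
                  retr_threshold: float = 0.3) -> bool:
    """Return True when neither signal supports answering.

    Requiring both signals to be weak (AND) rather than either one
    (OR) is one way to guard against over-refusal: a confident model
    or a highly relevant retrieved passage is enough to attempt an
    answer.
    """
    return (answer_confidence < conf_threshold
            and retrieval_score < retr_threshold)


def answer_or_refuse(answer: str,
                     answer_confidence: float,
                     retrieval_score: float) -> str:
    """Emit the model's answer or an abstention string."""
    if should_refuse(answer_confidence, retrieval_score):
        return "I don't know."
    return answer
```

Note the design choice: because either signal alone can override a refusal, this rule trades some missed abstentions for fewer spurious ones, which matches the paper's concern with over-refusal.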