Combining Evidence and Reasoning for Biomedical Fact-Checking

📅 2025-09-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Misinformation in healthcare—such as vaccine hesitancy and pseudoscientific therapies—undermines public trust in health systems. Biomedical claim verification faces challenges including domain-specific terminology, reliance on expert-curated evidence, and hallucination in large language models (LLMs). To address these, we propose CER: a framework that integrates scientific literature retrieval, LLM-based chain-of-thought reasoning, and supervised truth classification—enabling evidence-driven, interpretable, and verifiable fact-checking. Our key innovation lies in jointly modeling retrieval-augmented reasoning and supervised learning, which substantially mitigates hallucination while improving evidence fidelity and cross-domain generalization. CER achieves state-of-the-art performance on three biomedical benchmarks—HealthFC, BioASQ-7b, and SciFact. All code and data are publicly released to ensure reproducibility.
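The three stages described above (evidence retrieval, LLM chain-of-thought reasoning, and supervised truth classification) can be sketched as a minimal pipeline. This is an illustrative assumption of how the components fit together, not the paper's actual implementation: `retrieve_evidence`, `build_cot_prompt`, and `classify` are hypothetical names, the word-overlap retriever stands in for a real biomedical retriever, and the rule-based classifier stands in for the trained veracity model.

```python
# Hypothetical sketch of a CER-style pipeline: retrieve evidence,
# build a grounded reasoning prompt, then predict a veracity label.
# All function names and logic here are illustrative stand-ins.

# Toy evidence corpus standing in for retrieved scientific literature.
CORPUS = [
    "Randomized trials show the MMR vaccine is not associated with autism.",
    "Vitamin C supplementation does not prevent the common cold.",
]

def retrieve_evidence(claim: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank passages by word overlap with the claim (stand-in for a
    dense or BM25 biomedical retriever)."""
    claim_words = set(claim.lower().split())
    scored = sorted(corpus,
                    key=lambda doc: -len(claim_words & set(doc.lower().split())))
    return scored[:k]

def build_cot_prompt(claim: str, evidence: list[str]) -> str:
    """Assemble a chain-of-thought prompt that grounds the LLM's
    reasoning in the retrieved evidence."""
    lines = [f"Claim: {claim}", "Evidence:"]
    lines += [f"- {e}" for e in evidence]
    lines.append("Reason step by step, then answer SUPPORTED or REFUTED.")
    return "\n".join(lines)

def classify(claim: str, evidence: list[str]) -> str:
    """Stub for the supervised veracity classifier; a trained model
    over claim, evidence, and LLM reasoning would go here."""
    negations = {"not", "no", "never"}
    contradicts = any(negations & set(e.lower().split()) for e in evidence)
    return "REFUTED" if contradicts else "SUPPORTED"

if __name__ == "__main__":
    claim = "The MMR vaccine causes autism."
    evidence = retrieve_evidence(claim, CORPUS)
    print(build_cot_prompt(claim, evidence))
    print(classify(claim, evidence))  # prints "REFUTED"
```

In the actual framework, the prompt would be sent to an LLM whose generated reasoning chain is then consumed, together with the evidence, by the supervised classifier; grounding the prompt in retrieved passages is what constrains hallucination.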

📝 Abstract
Misinformation in healthcare, from vaccine hesitancy to unproven treatments, poses risks to public health and trust in medical systems. While machine learning and natural language processing have advanced automated fact-checking, validating biomedical claims remains uniquely challenging due to complex terminology, the need for domain expertise, and the critical importance of grounding in scientific evidence. We introduce CER (Combining Evidence and Reasoning), a novel framework for biomedical fact-checking that integrates scientific evidence retrieval, reasoning via large language models, and supervised veracity prediction. By integrating the text-generation capabilities of large language models with advanced retrieval techniques for high-quality biomedical scientific evidence, CER effectively mitigates the risk of hallucinations, ensuring that generated outputs are grounded in verifiable, evidence-based sources. Evaluations on expert-annotated datasets (HealthFC, BioASQ-7b, SciFact) demonstrate state-of-the-art performance and promising cross-dataset generalization. Code and data are released for transparency and reproducibility: https://github.com/PRAISELab-PicusLab/CER.
Problem

Research questions and friction points this paper is trying to address.

Validating biomedical claims against scientific evidence
Mitigating hallucinations in automated fact-checking systems
Integrating evidence retrieval with reasoning for healthcare misinformation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combining evidence retrieval with reasoning
Integrating LLMs with scientific evidence sources
Mitigating hallucinations via verifiable evidence grounding