🤖 AI Summary
Biomedical misinformation, from vaccine hesitancy to pseudoscientific therapies, undermines public trust in health systems. To address this, we propose CER, a framework for automated detection and interpretable verification of biomedical claims. CER integrates scientific literature retrieval, evidence-augmented reasoning with large language models (LLMs), and supervised factual consistency prediction. Crucially, it constrains LLM generation with high-quality retrieved scientific evidence, substantially mitigating hallucination and enabling robust cross-dataset generalization. Evaluated on three authoritative benchmarks (HealthFC, BioASQ-7b, and SciFact), CER achieves state-of-the-art performance. All code, datasets, and models are publicly released to foster reproducible, transparent medical fact-checking research.
📝 Abstract
Misinformation in healthcare, from vaccine hesitancy to unproven treatments, poses risks to public health and to trust in medical systems. While machine learning and natural language processing have advanced automated fact-checking, validating biomedical claims remains uniquely challenging due to complex terminology, the need for domain expertise, and the critical importance of grounding in scientific evidence. We introduce CER (Combining Evidence and Reasoning), a novel framework for biomedical fact-checking that integrates scientific evidence retrieval, reasoning via large language models, and supervised veracity prediction. By grounding the text-generation capabilities of large language models in high-quality retrieved biomedical evidence, CER mitigates the risk of hallucination and ensures that outputs rest on verifiable, evidence-based sources. Evaluations on expert-annotated datasets (HealthFC, BioASQ-7b, SciFact) demonstrate state-of-the-art performance and promising cross-dataset generalization. Code and data are released for transparency and reproducibility: https://github.com/PRAISELab-PicusLab/CER
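The abstract describes a three-stage pipeline: retrieve scientific evidence, reason over it with an LLM, then predict a veracity label. A minimal sketch of that control flow is below; the toy keyword-overlap retriever, the stubbed prompt builder, and the threshold-based classifier are all illustrative placeholders (not the authors' implementation, which uses real literature search, an actual LLM call, and a trained classifier).

```python
# Hypothetical sketch of the CER-style pipeline: (1) evidence retrieval,
# (2) evidence-conditioned LLM reasoning, (3) supervised veracity prediction.
# All names and the toy logic are assumptions for illustration only.
from dataclasses import dataclass, field

# Toy corpus standing in for a biomedical literature index.
CORPUS = [
    "Randomized trials show measles vaccination is safe and effective.",
    "No controlled study supports homeopathy for treating infections.",
]

@dataclass
class Verdict:
    label: str                        # e.g. "SUPPORTED" / "NOT ENOUGH INFO"
    evidence: list = field(default_factory=list)

def retrieve(claim: str, corpus: list, k: int = 1) -> list:
    """Stage 1: rank passages by naive word overlap with the claim
    (a real system would use a dense or BM25 retriever)."""
    claim_words = set(claim.lower().split())
    ranked = sorted(
        corpus,
        key=lambda p: len(claim_words & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(claim: str, evidence: list) -> str:
    """Stage 2: construct the evidence-augmented prompt an LLM would see.
    Here we only return the prompt instead of calling a model."""
    context = "\n".join(f"- {p}" for p in evidence)
    return f"Claim: {claim}\nEvidence:\n{context}\nDoes the evidence support the claim?"

def predict(claim: str, evidence: list) -> Verdict:
    """Stage 3: placeholder for the supervised veracity classifier."""
    overlap = set(claim.lower().split()) & set(" ".join(evidence).lower().split())
    label = "SUPPORTED" if len(overlap) >= 3 else "NOT ENOUGH INFO"
    return Verdict(label=label, evidence=evidence)

claim = "Measles vaccination is safe and effective."
evidence = retrieve(claim, CORPUS)
print(predict(claim, evidence).label)  # prints "SUPPORTED" on this toy corpus
```

The key design point the abstract emphasizes survives even in this sketch: the final verdict is produced only from retrieved evidence, so every output can be traced back to a source passage rather than to unconstrained generation.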