🤖 AI Summary
This work addresses a covert pattern in health-related misinformation: misquoting biomedical literature, in which credible scientific sources are superficially cited but systematically misrepresented to support false claims. We introduce MissciPlus, the first benchmark that pairs real-world misrepresented passages from cited publications with the corresponding false claims, covering diverse logical fallacies. Using MissciPlus, we benchmark retrieval models at locating the misleading passages, evaluate large language model (LLM)-based reasoning at verbalizing the applied fallacies, and assess evidence-based fact-checking models at refuting the claims. Experiments reveal that existing fact-checking models largely fail to handle misquotation-based fallacies, and that misquoted passages make LLMs more likely to accept false claims as true. This study establishes the first logical-fallacy benchmark grounded in authentic cases of scientific misuse, advancing evidence-aware fact-checking research focused on source credibility and interpretability.
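To make the pairing concrete, below is a minimal sketch of how such a claim/passage/fallacy record could be represented; all class and field names are illustrative assumptions and do not reflect MissciPlus' actual schema.

```python
# Hedged sketch of the claim/passage/fallacy pairing described above.
# All names and fields are illustrative assumptions, not MissciPlus' real schema.
from dataclasses import dataclass, field
from typing import List


@dataclass
class MisrepresentedPassage:
    passage_id: str   # passage within the misquoted publication
    text: str         # text that only superficially supports the claim


@dataclass
class FallacyInstance:
    false_claim: str        # the health-related misinformation claim
    cited_publication: str  # the credible biomedical source being misquoted
    passages: List[MisrepresentedPassage] = field(default_factory=list)
    fallacy_classes: List[str] = field(default_factory=list)  # applied logical fallacies


# Hypothetical usage with placeholder content only:
example = FallacyInstance(
    false_claim="Placeholder false health claim citing a study.",
    cited_publication="Placeholder biomedical publication",
    passages=[MisrepresentedPassage("p1", "Placeholder passage text ...")],
    fallacy_classes=["placeholder fallacy class"],
)
```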
📝 Abstract
Health-related misinformation claims often falsely cite a credible biomedical publication as evidence. These publications only superficially appear to support the false claim when logical fallacies are applied. In this work, we aim to detect and highlight such fallacies, which requires assessing the exact content of the misrepresented publications. To achieve this, we introduce MissciPlus, an extension of the fallacy detection dataset Missci. MissciPlus extends Missci by grounding the applied fallacies in real-world passages from the misrepresented studies. This creates a realistic test-bed for detecting and verbalizing fallacies under real-world input conditions, and it enables new, realistic passage-retrieval tasks. MissciPlus is the first logical fallacy dataset that pairs real-world misrepresented evidence with incorrect claims, matching the input of evidence-based fact-checking models. With MissciPlus, we i) benchmark retrieval models in identifying passages that support claims only through fallacious reasoning, ii) evaluate how well LLMs verbalize fallacious reasoning based on misrepresented scientific passages, and iii) assess the effectiveness of fact-checking models in refuting claims that misrepresent biomedical research. Our findings show that current fact-checking models struggle to use misrepresented scientific passages to refute misinformation. Moreover, these passages can mislead LLMs into accepting false claims as true.
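For intuition about task i), the following is a minimal sketch of ranking candidate passages from a misrepresented publication against a false claim, using a simple TF-IDF baseline as a stand-in for the retrieval models actually benchmarked; the function, inputs, and scoring choice are assumptions for illustration, not part of MissciPlus.

```python
# Minimal sketch of the passage-retrieval setting (task i), assuming a
# simple lexical TF-IDF baseline rather than any method from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def rank_passages(claim: str, passages: list) -> list:
    """Rank candidate passages from the misrepresented study by lexical
    similarity to the false claim (higher score = retrieved earlier)."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([claim] + passages)
    scores = cosine_similarity(matrix[0], matrix[1:])[0]
    return sorted(enumerate(scores), key=lambda x: x[1], reverse=True)


# Hypothetical usage: in practice the passages would come from the
# publication that the misinformation claim misquotes.
claim = "Placeholder false health claim citing a biomedical study."
passages = [
    "Placeholder passage A from the cited publication ...",
    "Placeholder passage B from the cited publication ...",
]
print(rank_passages(claim, passages))
```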