AI Summary
Addressing the challenges of causal fallacy identification and poor interpretability in automated fact-checking, this paper introduces fine-grained causal event relation modeling, the first such approach for this task. The proposed method integrates event relation extraction, semantic similarity computation, and rule-driven causal consistency verification. It explicitly models causal logic along the event chains linking claims to evidence, enabling precise detection of causal direction errors, over-attribution, and other causal fallacies, while generating semantically rich, human-readable justifications. As the first benchmark method for causal reasoning in fact-checking, it achieves significant improvements on two mainstream datasets: gains of +5.2–7.8 F1 points in causal error identification and a 32% improvement in explanation quality (per human evaluation). This work establishes a new paradigm for interpretable, causally aware fact-checking.
Abstract
In fact-checking applications, a common reason for rejecting a claim is the presence of an erroneous cause-effect relationship between the events involved. However, current automated fact-checking methods lack dedicated causal reasoning, missing a valuable opportunity for semantically rich explainability. To address this gap, we propose a methodology that combines event relation extraction, semantic similarity computation, and rule-based reasoning to detect logical inconsistencies between the chains of events mentioned in a claim and in its supporting evidence. Evaluated on two fact-checking datasets, this method establishes the first baseline for integrating fine-grained causal event relations into fact-checking and enhances the explainability of verdict prediction.
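To make the pipeline concrete, the sketch below illustrates one possible shape of the rule-based consistency step: (cause, effect) event pairs extracted from a claim are matched against pairs extracted from the evidence, and a reversed match is flagged as a causal direction error with a readable justification. This is a hypothetical illustration, not the paper's actual implementation; the string-overlap `similar` function is a toy stand-in for the semantic similarity model, and all names are invented for the example.

```python
# Illustrative sketch (not the paper's code) of rule-driven causal
# consistency verification between claim and evidence event pairs.
from difflib import SequenceMatcher


def similar(a: str, b: str, threshold: float = 0.6) -> bool:
    """Toy stand-in for semantic similarity between two event mentions."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold


def check_causal_consistency(claim_rel, evidence_rels):
    """Compare one (cause, effect) pair from a claim against evidence pairs.

    Returns a verdict label and a human-readable justification, mirroring
    the direction-error rule described in the text.
    """
    c_cause, c_effect = claim_rel
    for e_cause, e_effect in evidence_rels:
        # Rule 1: same direction, matching events -> consistent.
        if similar(c_cause, e_cause) and similar(c_effect, e_effect):
            return "consistent", f"Evidence confirms '{e_cause}' -> '{e_effect}'."
        # Rule 2: events match but cause and effect are swapped.
        if similar(c_cause, e_effect) and similar(c_effect, e_cause):
            return "direction_error", (
                f"Claim reverses the causal direction: evidence states "
                f"'{e_cause}' -> '{e_effect}'."
            )
    # No rule fired: the claimed relation has no counterpart in the evidence.
    return "unsupported", "No matching causal relation found in the evidence."


verdict, reason = check_causal_consistency(
    ("heavy rainfall", "flooding"),
    [("flooding", "heavy rainfall")],
)
print(verdict)  # direction_error
```

In a full system, the similarity test would be replaced by embedding-based matching, and additional rules (e.g. for over-attribution) would extend the same pattern of matching event pairs and emitting a verdict with its justification.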