🤖 AI Summary
This work proposes a knowledge graph–driven contrastive reasoning framework that addresses the accuracy and interpretability limitations of automated fact-checking. By constructing knowledge graphs that capture relationships between claims and supporting evidence, the method automatically generates contrastive questions that guide large language models to focus on critical evidence and to produce interpretable summaries supporting their veracity judgments. This study is the first to integrate knowledge graph–guided contrastive reasoning into the fact-checking pipeline of large language models, substantially enhancing their capacity for evidence integration and logical inference. Experiments on the LIAR-RAW and RAWFC datasets show that the proposed approach achieves state-of-the-art performance, significantly improving fact-checking accuracy.
📝 Abstract
Claim verification is a core component of automated fact-checking systems, aimed at determining the truthfulness of a statement by assessing it against reliable evidence sources such as documents or knowledge bases. This work presents KG-CRAFT, a method that improves automatic claim verification by leveraging large language models (LLMs) augmented with contrastive questions grounded in a knowledge graph. KG-CRAFT first constructs a knowledge graph from claims and their associated reports, then formulates contextually relevant contrastive questions based on the knowledge graph structure. These questions guide the distillation of evidence-based reports, which are synthesised into a concise summary used by LLMs for veracity assessment. Extensive evaluations on two real-world datasets (LIAR-RAW and RAWFC) demonstrate that our method achieves a new state of the art in predictive performance. Comprehensive analyses validate in detail the effectiveness of our knowledge graph-based contrastive reasoning approach in improving LLMs' fact-checking capabilities.
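To make the pipeline concrete, here is a minimal, hypothetical sketch of the stages the abstract describes (KG construction → contrastive question generation → evidence distillation → veracity judgment). All function names are illustrative assumptions, and simple rule-based stand-ins replace the LLM calls; this is not the authors' implementation.

```python
# Hypothetical sketch of the KG-CRAFT pipeline from the abstract.
# Rule-based stand-ins are used where the real method would call an LLM.

def build_knowledge_graph(claim, reports):
    """Toy KG construction: extract (subject, relation, object) triples.
    Real KG extraction would use an LLM or IE model; here we fake it
    with a naive 'X is Y' pattern match."""
    triples = []
    for text in [claim] + reports:
        for sentence in text.split(". "):
            parts = sentence.split(" is ")
            if len(parts) == 2:
                triples.append((parts[0].strip(), "is", parts[1].strip(" .")))
    return triples

def generate_contrastive_questions(triples):
    """For each KG triple, pose a contrastive question asking whether
    the evidence supports or contradicts it."""
    return [f"Does the evidence support or contradict that {s} {r} {o}?"
            for (s, r, o) in triples]

def distill_summary(questions, reports):
    """Stand-in for LLM-guided distillation: keep only report sentences
    that share a word with some contrastive question."""
    keywords = {w.lower().strip("?") for q in questions for w in q.split()}
    kept = [sent for rep in reports for sent in rep.split(". ")
            if keywords & {w.lower().strip(".") for w in sent.split()}]
    return " ".join(kept)

def verify(claim, summary):
    """Stand-in veracity judgment: the real system prompts an LLM with
    the distilled summary; here we just check textual support."""
    return "true" if claim.strip(".") in summary else "unverified"

claim = "The sky is blue."
reports = ["The sky is blue. Grass grows."]
kg = build_knowledge_graph(claim, reports)
questions = generate_contrastive_questions(kg)
summary = distill_summary(questions, reports)
verdict = verify(claim, summary)
```

The key design point the abstract emphasises is that the questions are derived from the KG structure, so the distillation step is steered toward evidence relevant to the claim's entities and relations rather than summarising the reports indiscriminately.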