🤖 AI Summary
Existing LLM-driven fact-checking methods suffer from insufficient claim decomposition and ambiguous coreference resolution during claim parsing, leading to high verification complexity and low accuracy. To address these issues, this paper introduces the first approach that models claims as graph structures of subject–predicate–object (SPO) triples, enabling fine-grained decomposition and explicit coreference disambiguation through graph construction. It further proposes a relation-constrained, graph-guided planning mechanism that supports interpretable, triple-wise verification, together with a triple-level semantic alignment procedure coordinated by the LLM through graph operations. Evaluated on three benchmark datasets (FEVER, SciFact, and Climate-FEVER), the method achieves state-of-the-art performance, significantly improving fine-grained verification accuracy and robustness.
📝 Abstract
Fact-checking plays a crucial role in combating misinformation. Existing methods that use large language models (LLMs) for claim decomposition face two key limitations: (1) insufficient decomposition, which adds unnecessary complexity to the verification process, and (2) ambiguity of mentions, which leads to incorrect verification results. To address these challenges, we introduce a claim graph composed of triples: decomposing a claim into triples yields fine-grained verification units, while the graph structure reduces mention ambiguity. Based on this core idea, we propose a graph-based fact-checking framework, GraphFC, with three key components: graph construction, which builds both claim and evidence graphs; graph-guided planning, which prioritizes the order in which triples are verified; and graph-guided checking, which verifies claim triples against the evidence graph one by one. Extensive experiments show that GraphFC enables fine-grained decomposition while resolving referential ambiguities through relational constraints, achieving state-of-the-art performance across three datasets.
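To make the three-stage pipeline concrete, here is a minimal sketch on toy data. All names, the planning heuristic, and the exact-match checking below are illustrative assumptions, not the paper's implementation (which uses an LLM for graph construction and triple-level semantic alignment).

```python
# Illustrative sketch of a GraphFC-style pipeline: build claim/evidence
# graphs of SPO triples, plan a verification order, then check triples
# one by one. All logic here is a simplified stand-in for the paper's
# LLM-based components.
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str

def build_graph(triples):
    """Graph construction: index triples by subject so that mentions
    resolved to the same entity share one node."""
    graph = {}
    for t in triples:
        graph.setdefault(t.subject, []).append(t)
    return graph

def plan_order(claim_triples):
    """Graph-guided planning (assumed heuristic): verify triples with
    fully grounded subjects before those whose subject is an unresolved
    mention, marked here with a '?' prefix."""
    return sorted(claim_triples, key=lambda t: t.subject.startswith("?"))

def check(claim_triples, evidence_triples):
    """Graph-guided checking: a claim is SUPPORTED only if every claim
    triple is matched in the evidence graph (exact match here; the paper
    uses semantic alignment instead)."""
    evidence = set(evidence_triples)
    for t in plan_order(claim_triples):
        if t not in evidence:
            return "REFUTED", t
    return "SUPPORTED", None

claim = [
    Triple("Marie Curie", "won", "Nobel Prize in Physics"),
    Triple("Marie Curie", "born_in", "Warsaw"),
]
evidence = [
    Triple("Marie Curie", "won", "Nobel Prize in Physics"),
    Triple("Marie Curie", "born_in", "Warsaw"),
]
print(check(claim, evidence)[0])  # SUPPORTED
```

Keeping each triple as an independent unit is what makes the verdict fine-grained: when the claim fails, `check` also returns the specific triple the evidence does not support.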