LiveFact: A Dynamic, Time-Aware Benchmark for LLM-Driven Fake News Detection

📅 2026-04-06
🤖 AI Summary
Existing fake news detection benchmarks are static: they fail to evaluate models' reasoning under evolving information and temporal uncertainty, and they are susceptible to data contamination. To address these gaps, this work proposes LiveFact, a dynamic, time-aware evaluation framework that simulates real-world informational chaos through continuously updated temporal evidence streams and incorporates an explicit contamination-monitoring mechanism. LiveFact employs a dual-mode evaluation paradigm, combining classification and reasoning tasks, and systematically assesses 22 large language models. The evaluation reveals, for the first time, a "reasoning gap": models struggle with early-stage inference when evidence is still insufficient. Experiments show that open-source MoE models such as Qwen3-235B-A22B match or exceed leading closed-source systems on LiveFact, validating the benchmark's effectiveness at evaluating model robustness and temporal reasoning.
📝 Abstract
The rapid development of Large Language Models (LLMs) has transformed fake news detection and fact-checking from simple classification into complex reasoning tasks. However, evaluation frameworks have not kept pace. Current benchmarks are static, making them vulnerable to benchmark data contamination (BDC) and ineffective at assessing reasoning under temporal uncertainty. To address this, we introduce LiveFact, a continuously updated benchmark that simulates the real-world "fog of war" in misinformation detection. LiveFact uses dynamic, temporal evidence sets to evaluate models on their ability to reason with evolving, incomplete information rather than on memorized knowledge. We propose a dual-mode evaluation: Classification Mode for final verification and Inference Mode for evidence-based reasoning, along with a component to monitor BDC explicitly. Tests with 22 LLMs show that open-source Mixture-of-Experts models, such as Qwen3-235B-A22B, now match or outperform proprietary state-of-the-art systems. More importantly, our analysis finds a significant "reasoning gap": capable models exhibit epistemic humility by recognizing unverifiable claims in early data slices, an aspect traditional static benchmarks overlook. LiveFact sets a sustainable standard for evaluating robust, temporally aware AI verification.
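The dual-mode setup described above can be pictured as scoring a model twice over the same claim: once with the full evidence set (Classification Mode) and once slice-by-slice in temporal order (Inference Mode), where answering "unverifiable" on early slices is legitimate. The following is a minimal, hypothetical sketch of that protocol; all names (`EvidenceSlice`, `Claim`, `evaluate_dual_mode`) are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class EvidenceSlice:
    timestamp: str
    documents: List[str]


@dataclass
class Claim:
    text: str
    slices: List[EvidenceSlice]  # evidence in temporal order


def evaluate_dual_mode(
    model: Callable[[str, str], str], claim: Claim
) -> Dict[str, object]:
    """Run one claim through both evaluation modes.

    Classification Mode: verdict given the complete evidence set.
    Inference Mode: a verdict after each time slice, so early answers
    must cope with incomplete evidence ("unverifiable" is acceptable).
    """
    # Classification Mode: concatenate every document across all slices.
    full_evidence = "\n".join(d for s in claim.slices for d in s.documents)
    classification = model(claim.text, full_evidence)

    # Inference Mode: grow the evidence window one slice at a time.
    inference_trace: List[Tuple[str, str]] = []
    seen: List[str] = []
    for s in claim.slices:
        seen.extend(s.documents)
        inference_trace.append((s.timestamp, model(claim.text, "\n".join(seen))))

    return {"classification": classification, "inference_trace": inference_trace}
```

A model that flips from "unverifiable" on the first slice to a definitive verdict once confirming evidence arrives would exhibit the epistemic humility the abstract describes; a model that commits early, before the evidence supports it, exposes the "reasoning gap".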
Problem

Research questions and friction points this paper is trying to address.

fake news detection
benchmark data contamination
temporal uncertainty
dynamic evaluation
reasoning gap
Innovation

Methods, ideas, or system contributions that make the work stand out.

dynamic benchmark
temporal reasoning
benchmark data contamination
evidence-based inference
epistemic humility
Cheng Xu
University College Dublin and Bebxy
Changhong Jin
University College Dublin
Yingjie Niu
University College Dublin
Nan Yan
Georgia Institute of Technology and Bebxy
Yuke Mei
Bebxy
Shuhao Guan
University College Dublin
Liming Chen
Dalian University of Technology
M-Tahar Kechadi
University College Dublin