Veracity: An Open-Source AI Fact-Checking System

📅 2025-06-18
🤖 AI Summary
The proliferation of misinformation poses severe threats to public understanding and social stability, and generative AI exacerbates its dissemination. To address this, we propose an open-source, transparent, and interpretable AI-powered fact-checking system that integrates large language models (LLMs) with real-time web retrieval agents within a retrieval-augmented generation (RAG) framework, enabling end-to-end, multilingual veracity assessment of user-submitted claims. Our system introduces a novel, intuitive numerical credibility score, accompanied by provenance-based evidence and natural-language explanations. It features a chat-like, user-friendly interface designed to enhance verification efficiency and trust among non-expert users. Evaluated across multilingual scenarios, the system achieves high accuracy while ensuring interpretability and public comprehensibility. By prioritizing reproducibility, auditability, and transparency, it establishes a new paradigm for AI-driven fact-checking grounded in open science principles.
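The pipeline the summary describes — retrieve evidence for a user-submitted claim, then aggregate it into a numerical credibility score with provenance — can be sketched as follows. This is a minimal illustrative sketch, not Veracity's actual implementation: all names, the stance values, and the stance-to-score mapping are assumptions, and the retriever is stubbed where a real system would call a web search API and an LLM stance classifier.

```python
from dataclasses import dataclass


@dataclass
class Evidence:
    """One retrieved piece of evidence with its provenance."""
    source_url: str
    snippet: str
    stance: float  # hypothetical LLM output in [-1.0, 1.0]: refutes .. supports


def retrieve_evidence(claim: str) -> list[Evidence]:
    """Stub for the web-retrieval agent; a real system would query a
    search API and have an LLM judge each result's stance on the claim."""
    return [
        Evidence("https://example.org/a", "Report consistent with the claim.", 0.8),
        Evidence("https://example.org/b", "Partially corroborating coverage.", 0.4),
        Evidence("https://example.org/c", "One source disputes a detail.", -0.2),
    ]


def credibility_score(evidence: list[Evidence]) -> float:
    """Map the mean evidence stance linearly onto a 0-100 credibility score."""
    if not evidence:
        return 50.0  # no evidence found: abstain at the midpoint
    mean_stance = sum(e.stance for e in evidence) / len(evidence)
    return round(50.0 * (1.0 + mean_stance), 1)


def fact_check(claim: str) -> dict:
    """End-to-end check: retrieve, score, and return provenance for display."""
    ev = retrieve_evidence(claim)
    return {
        "claim": claim,
        "score": credibility_score(ev),
        "provenance": [e.source_url for e in ev],
    }
```

In this sketch, the score and the list of source URLs are exactly what a chat-style interface would render alongside a natural-language explanation; the linear stance-to-score mapping is one simple choice among many.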

📝 Abstract
The proliferation of misinformation poses a significant threat to society, exacerbated by the capabilities of generative AI. This demo paper introduces Veracity, an open-source AI system designed to empower individuals to combat misinformation through transparent and accessible fact-checking. Veracity leverages the synergy between Large Language Models (LLMs) and web retrieval agents to analyze user-submitted claims and provide grounded veracity assessments with intuitive explanations. Key features include multilingual support, numerical scoring of claim veracity, and an interactive interface inspired by familiar messaging applications. This paper will showcase Veracity's ability to not only detect misinformation but also explain its reasoning, fostering media literacy and promoting a more informed society.
Problem

Research questions and friction points this paper is trying to address.

Combating misinformation using open-source AI fact-checking
Analyzing claims with LLMs and web retrieval agents
Providing transparent veracity assessments and explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages LLMs and web retrieval agents
Provides multilingual veracity assessments
Features interactive messaging-like interface
Taylor Lynn Curtis
Mila
Maximilian Puelma Touzel
Mila, Université de Montréal
behavioural modelling · machine learning
William Garneau
Nord AI
Manon Gruaz
Mila
Mike Pinder
Mila
Li Wei Wang
McGill University
Sukanya Krishna
Harvard University, Supervised Program for Alignment Research (SPAR)
Luda Cohen
Supervised Program for Alignment Research (SPAR)
Jean-François Godbout
Mila, Université de Montréal
Reihaneh Rabbany
Assistant Professor of Computer Science, McGill University; Canada CIFAR AI Chair, Mila
Data Mining · Machine Learning · Graph Mining · Network Science · Computational Social Science
Kellin Pelrine
FAR.AI
AI Security · AI Agents