Exploring Content and Social Connections of Fake News with Explainable Text and Graph Learning

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
False news propagates rapidly on social media by exploiting social interactions (e.g., likes, shares) and user networks, and detection methods that rely solely on textual analysis, or that merely label content as false, can trigger automation bias and confirmation bias in users. To address this, we propose an interpretable multimodal fact-checking framework that jointly models semantic text features, user behavioral signals, and social-graph structure using graph neural networks and explainable AI techniques. We also introduce a novel evaluation protocol that quantifies explanation quality along three dimensions: interpretability, trustworthiness, and robustness. Experiments on English, Spanish, and Portuguese datasets show that the multimodal approach significantly outperforms unimodal baselines, and that the generated explanations are clear and reliable, enhancing user trust in system decisions while mitigating automation and confirmation biases.
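The fusion pipeline the summary describes (text features + social signals + graph structure, then a classifier) can be sketched in plain Python. This is an illustrative toy, not the paper's actual code: the one-hop mean aggregation stands in for a graph neural network layer, and all function names, features, and weights are hypothetical.

```python
# Toy sketch of multimodal fusion for misinformation detection.
# one_hop_mean approximates a single message-passing step over the
# user-interaction graph; fuse_features concatenates modalities;
# classify is a stand-in linear scorer (weights are made up).

def one_hop_mean(node_feats, edges, node):
    """Mean of a node's neighbors' feature vectors (one message-passing hop)."""
    neigh = [node_feats[v] for u, v in edges if u == node]
    neigh += [node_feats[u] for u, v in edges if v == node]
    if not neigh:
        return [0.0] * len(node_feats[node])
    dim = len(neigh[0])
    return [sum(f[i] for f in neigh) / len(neigh) for i in range(dim)]

def fuse_features(text_feats, social_feats, graph_agg):
    """Concatenate the three modalities into a single vector."""
    return text_feats + social_feats + graph_agg

def classify(fused, weights, bias=0.0):
    """Linear score + threshold; a stand-in for the trained classifier."""
    score = sum(w * x for w, x in zip(weights, fused)) + bias
    return "misinformation" if score > 0.0 else "credible"
```

For example, fusing a post's text embedding with its author's share-count feature and the one-hop aggregate of neighboring users' features yields one vector that `classify` scores. In the actual framework a learned GNN and trained weights would replace these placeholders.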

📝 Abstract
The global spread of misinformation and concerns about content trustworthiness have driven the development of automated fact-checking systems. Since false information often exploits social media dynamics such as "likes" and user networks to amplify its reach, effective solutions must go beyond content analysis to incorporate these factors. Moreover, simply labelling content as false can be ineffective or even reinforce biases such as automation and confirmation bias. This paper proposes an explainable framework that combines content, social media, and graph-based features to enhance fact-checking. It integrates a misinformation classifier with explainability techniques to deliver complete and interpretable insights supporting classification decisions. Experiments demonstrate that multimodal information improves performance over single modalities, with evaluations conducted on datasets in English, Spanish, and Portuguese. Additionally, the framework's explanations were assessed for interpretability, trustworthiness, and robustness with a novel protocol, showing that it effectively generates human-understandable justifications for its predictions.
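One axis of the evaluation protocol described above, robustness, can be illustrated as follows: an explanation is robust if a small perturbation of the input barely changes the feature attributions. The snippet below is a minimal sketch under that assumption; the linear `attributions` function is a toy stand-in for the framework's actual explainer, and all names are hypothetical.

```python
import math

def attributions(features, weights):
    """Toy linear attribution: each feature's signed contribution to the score."""
    return [w * x for w, x in zip(weights, features)]

def cosine(a, b):
    """Cosine similarity between two attribution vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def explanation_robustness(features, perturbed, weights):
    """Near 1.0 means the explanation is stable under a small input perturbation."""
    return cosine(attributions(features, weights),
                  attributions(perturbed, weights))
```

Interpretability and trustworthiness, the protocol's other two dimensions, involve human judgment and would be assessed with user-facing evaluations rather than a similarity score.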
Problem

Research questions and friction points this paper is trying to address.

Combating misinformation spread via social media dynamics
Enhancing fact-checking with explainable multimodal analysis
Improving interpretability of automated misinformation detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines content, social media, and graph features
Integrates explainability techniques for interpretable insights
Evaluated on multilingual datasets for robustness