The Psychology of Falsehood: A Human-Centric Survey of Misinformation Detection

📅 2025-09-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current fact-checking systems overemphasize binary truth-falsity classification of claims, neglecting the deeper societal harms of misinformation—such as cognitive bias exploitation, social influence amplification, and emotional arousal—that stem from psychological mechanisms. This paper proposes a human-centered paradigm for misinformation detection, integrating cognitive science, social behavior modeling, and artificial intelligence to construct a multimodal framework that incorporates neurobehavioral features. Through a systematic review of psychology-informed detection approaches, we identify critical gaps in contextual adaptability, robustness, and generalizability of existing systems. Our core contribution is the first systematic formalization of the human perception–interpretation–response feedback loop, shifting the detection objective from “truth verification” to “harm identification.” This reframing provides both theoretical foundations and technical pathways to enhance the societal efficacy of misinformation governance.
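The proposed shift from "truth verification" to "harm identification" could be sketched as a scoring function that combines a factuality signal with the psychological cues the summary names (emotional arousal, social amplification, cognitive-bias exploitation). The paper specifies no concrete formula; the weights, field names, and linear combination below are purely illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """Hypothetical per-claim features; all values normalized to [0, 1]."""
    factuality: float            # 1.0 = verified true, 0.0 = verified false
    emotional_arousal: float     # e.g. output of an arousal/sentiment model
    social_amplification: float  # e.g. normalized share velocity
    bias_exploitation: float     # e.g. confirmation-bias cue score

def harm_score(s: Signals, weights=(0.25, 0.30, 0.25, 0.20)) -> float:
    """Illustrative harm estimate: low factuality raises harm, and the
    psychological cues raise it further, per the assumed weights."""
    w_f, w_e, w_s, w_b = weights
    score = (w_f * (1.0 - s.factuality)
             + w_e * s.emotional_arousal
             + w_s * s.social_amplification
             + w_b * s.bias_exploitation)
    # Clamp to [0, 1] so downstream thresholds are well-defined.
    return max(0.0, min(1.0, score))
```

Under this toy model, a claim that is only mildly false but highly arousing and widely amplified can score as more harmful than a flatly false but inert one, which is the reframing the survey argues for.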

📝 Abstract
Misinformation remains one of the most significant issues in the digital age. While automated fact-checking has emerged as a viable solution, most current systems are limited to evaluating factual accuracy. However, the detrimental effect of misinformation transcends simple falsehoods; it takes advantage of how individuals perceive, interpret, and emotionally react to information. This underscores the need to move beyond factuality and adopt more human-centered detection frameworks. In this survey, we explore the evolving interplay between traditional fact-checking approaches and psychological concepts such as cognitive biases, social dynamics, and emotional responses. By analyzing state-of-the-art misinformation detection systems through the lens of human psychology and behavior, we reveal critical limitations of current methods and identify opportunities for improvement. Additionally, we outline future research directions aimed at creating more robust and adaptive frameworks, such as neuro-behavioural models that integrate technological factors with the complexities of human cognition and social influence. These approaches offer promising pathways to more effectively detect and mitigate the societal harms of misinformation.
Problem

Research questions and friction points this paper is trying to address.

Addressing limitations of automated fact-checking systems
Integrating psychological factors like cognitive biases
Developing human-centered misinformation detection frameworks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-centered detection frameworks
Integrating psychological concepts with fact-checking
Neuro-behavioural models combining technology and cognition