Evaluating the Simulation of Human Personality-Driven Susceptibility to Misinformation with LLMs

📅 2025-06-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically investigates whether large language models (LLMs) can simulate individual differences in misinformation susceptibility driven by the Big Five personality traits, evaluating their capacity to replicate human behavioral patterns in news veracity judgment tasks. We construct LLM agents parameterized to match human personality profiles and compare their behavioral responses against a publicly available personality–news judgment dataset. Results demonstrate that LLMs reliably reproduce empirically observed associations between agreeableness, conscientiousness, and misjudgment tendencies, though with systematic biases, while simulation fidelity remains limited for openness and neuroticism. This work establishes both the promise and the limitations of LLMs for personality-informed cognitive modeling, integrates personality psychology into misinformation research, and provides a reproducible methodological framework and empirical benchmark for modeling artificial cognitive diversity.

📝 Abstract
Large language models (LLMs) make it possible to generate synthetic behavioral data at scale, offering an ethical, low-cost alternative to human experiments. Whether such data can faithfully capture psychological differences driven by personality traits, however, remains an open question. We evaluate the capacity of LLM agents, conditioned on Big-Five profiles, to reproduce personality-based variation in susceptibility to misinformation, focusing on news discernment: the ability to judge true headlines as true and false headlines as false. Leveraging published datasets in which human participants with known personality profiles rated headline accuracy, we create matching LLM agents and compare their responses with the original human patterns. Certain trait-misinformation associations, notably those involving Agreeableness and Conscientiousness, are reliably replicated, whereas others diverge, revealing systematic biases in how LLMs internalize and express personality. The results underscore both the promise and the limits of personality-aligned LLMs for behavioral simulation and offer new insight into modeling cognitive diversity in artificial agents.
Problem

Research questions and friction points this paper is trying to address.

Assess whether LLM agents replicate personality-driven misinformation susceptibility
Compare human and LLM accuracy judgments of true versus false headlines
Identify systematic biases in how LLMs internalize and express personality traits
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs generate synthetic behavioral data as an ethical, low-cost alternative to human experiments
LLM agents are conditioned on Big-Five personality profiles to match human participants
Human and LLM misinformation susceptibility are compared directly on a published dataset
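The simulation pipeline described above — condition an agent on a Big-Five profile, collect its headline-accuracy ratings, and score news discernment — can be sketched in a few lines. This is a minimal illustration, not the paper's actual code: the persona prompt template, the 1–7 trait scale, and the 1–6 accuracy scale are all assumptions, and the LLM call itself is left out.

```python
# Hypothetical sketch of a Big-Five-conditioned agent prompt and a
# news-discernment score. All names and scales here are illustrative
# assumptions, not the paper's implementation.

TRAITS = ["Openness", "Conscientiousness", "Extraversion",
          "Agreeableness", "Neuroticism"]

def persona_prompt(profile: dict) -> str:
    """Render a Big-Five profile (assumed 1-7 scale) as a system prompt
    that parameterizes an LLM agent to match a human participant."""
    lines = [f"- {t}: {profile[t]}/7" for t in TRAITS]
    return ("You are a study participant with this personality profile:\n"
            + "\n".join(lines)
            + "\nRate each headline's accuracy from 1 (definitely false) "
              "to 6 (definitely true).")

def discernment(ratings: list, is_true: list) -> float:
    """News discernment: mean rating of true headlines minus mean rating
    of false headlines (higher = better truth/falsehood discrimination)."""
    true_r = [r for r, t in zip(ratings, is_true) if t]
    false_r = [r for r, t in zip(ratings, is_true) if not t]
    return sum(true_r) / len(true_r) - sum(false_r) / len(false_r)
```

Computing the same discernment score for each human participant and for the LLM agent built from that participant's profile yields paired values whose trait-wise correlations can then be compared, which is the comparison the paper reports.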