Enhancing Debunking Effectiveness through LLM-based Personality Adaptation

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of enhancing the persuasive efficacy of debunking messages across individuals with varying personality traits. By integrating large language models (LLMs) with the Big Five personality framework, the authors design personality-tailored prompt engineering strategies to generate customized debunking content. Innovatively, they employ a second LLM to simulate target personality profiles for automated persuasiveness evaluation, replacing conventional human-based assessment. This approach represents the first integration of personality psychology and LLM prompt engineering in misinformation rebuttal, establishing an efficient, scalable, and ethically sound automated evaluation framework. Experimental results demonstrate that personalized debunking is generally more persuasive, with individuals high in openness being particularly receptive, whereas high neuroticism attenuates persuasive impact. Robustness is further enhanced through multi-model evaluation.

📝 Abstract
This study proposes a novel methodology for generating personalized fake news debunking messages by prompting Large Language Models (LLMs) with persona-based inputs aligned to the Big Five personality traits: Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness. Our approach guides LLMs to transform generic debunking content into personalized versions tailored to specific personality profiles. To assess the effectiveness of these transformations, we employ a separate LLM as an automated evaluator simulating corresponding personality traits, thereby eliminating the need for costly human evaluation panels. Our results show that personalized messages are generally seen as more persuasive than generic ones. We also find that traits like Openness tend to increase persuadability, while Neuroticism can lower it. Differences between LLM evaluators suggest that using multiple models provides a clearer picture. Overall, this work demonstrates a practical way to create more targeted debunking messages exploiting LLMs, while also raising important ethical questions about how such technology might be used.
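The pipeline described in the abstract has two prompt-construction steps: a generator LLM is asked to rewrite a generic debunking message for a reader with a given Big Five profile, and a separate evaluator LLM, role-playing that same profile, is asked to rate persuasiveness in place of a human panel. A minimal sketch of those two steps, where the prompt wording, function names, and 1-to-7 rating scale are illustrative assumptions rather than the paper's actual templates:

```python
# Hypothetical sketch of the two-LLM personalize/evaluate pipeline.
# Prompt wording and the 1-7 scale are assumptions, not the paper's templates.

BIG_FIVE = ["Extraversion", "Agreeableness", "Conscientiousness",
            "Neuroticism", "Openness"]

def build_personalization_prompt(trait: str, generic_debunk: str) -> str:
    """Prompt for the generator LLM: tailor a generic debunking
    message to a reader scoring high in one Big Five trait."""
    if trait not in BIG_FIVE:
        raise ValueError(f"Unknown Big Five trait: {trait}")
    return (
        f"Rewrite the following debunking message so that it is maximally "
        f"persuasive for a reader who scores high in {trait}. "
        f"Preserve all factual content.\n\n"
        f"Message:\n{generic_debunk}"
    )

def build_evaluator_prompt(trait: str, debunk: str) -> str:
    """Prompt for a separate evaluator LLM that simulates the same
    personality profile and rates persuasiveness, replacing a human panel."""
    if trait not in BIG_FIVE:
        raise ValueError(f"Unknown Big Five trait: {trait}")
    return (
        f"You are a person who scores high in {trait} on the Big Five "
        f"personality inventory. Read the debunking message below and rate "
        f"how persuasive you find it on a scale from 1 (not at all) to 7 "
        f"(extremely). Reply with the number only.\n\n"
        f"Message:\n{debunk}"
    )
```

In the full pipeline each prompt would be sent to a different LLM, and, per the paper's robustness finding, evaluation would be repeated across several evaluator models and the scores aggregated; only the prompt construction is shown here.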
Problem

Research questions and friction points this paper is trying to address.

fake news debunking
personality adaptation
persuasiveness
Big Five personality traits
LLM-based personalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

personality-adapted debunking
Large Language Models
Big Five personality traits
automated persuasion evaluation
personalized misinformation countermeasures
Pietro Dell'Oglio
Dipartimento di Ingegneria dell’Informazione, Università di Pisa, Largo Lucio Lazzarino 1, Pisa, Italy
Alessandro Bondielli
Dipartimento di Informatica, Università di Pisa, Largo B. Pontecorvo 3, Pisa, Italy
Francesco Marcelloni
Professor of Data Mining and Machine Learning, University of Pisa, Circle U. Alliance
Artificial Intelligence, Federated Learning, Computational Intelligence, Big Data Mining, Fuzzy
Lucia C. Passaro
University of Pisa
Natural Language Processing, Computational Linguistics, Semantics