🤖 AI Summary
This study addresses the challenge of making debunking messages more persuasive for individuals with varying personality traits. By combining large language models (LLMs) with the Big Five personality framework, the authors design personality-tailored prompt-engineering strategies that generate customized debunking content. Notably, they employ a second LLM to simulate the target personality profile for automated persuasiveness evaluation, replacing conventional human-based assessment. The work presents the first integration of personality psychology with LLM prompt engineering for misinformation rebuttal, establishing an efficient, scalable, and ethically sound automated evaluation framework. Experimental results show that personalized debunking is generally more persuasive: individuals high in Openness are particularly receptive, whereas high Neuroticism attenuates persuasive impact. Evaluating with multiple models further strengthens the robustness of these findings.
📝 Abstract
This study proposes a novel methodology for generating personalized fake-news debunking messages by prompting Large Language Models (LLMs) with persona-based inputs aligned to the Big Five personality traits: Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness. Our approach guides LLMs to transform generic debunking content into personalized versions tailored to specific personality profiles. To assess the effectiveness of these transformations, we employ a separate LLM as an automated evaluator that simulates the corresponding personality traits, thereby eliminating the need for costly human evaluation panels. Our results show that personalized messages are generally judged more persuasive than generic ones. We also find that traits like Openness tend to increase persuadability, while Neuroticism can lower it. Differences between LLM evaluators suggest that using multiple models provides a clearer picture. Overall, this work demonstrates a practical way to create more targeted debunking messages using LLMs, while also raising important ethical questions about how such technology might be used.
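The two-stage pipeline described in the abstract (a generator LLM personalizes a debunking message per Big Five trait, then a second LLM role-playing the same trait rates its persuasiveness) can be sketched as prompt templates. This is a minimal illustration only; the function names and prompt wording are assumptions, not the authors' actual prompts, and the model-calling code is omitted.

```python
# Illustrative sketch of the generate-then-evaluate prompting pipeline.
# Prompt wording and function names are hypothetical, not the paper's own.

BIG_FIVE = [
    "Extraversion", "Agreeableness", "Conscientiousness",
    "Neuroticism", "Openness",
]

def build_personalization_prompt(trait: str, generic_debunk: str) -> str:
    """Prompt for the generator LLM: rewrite a generic debunking
    message for a reader scoring high in the given Big Five trait."""
    if trait not in BIG_FIVE:
        raise ValueError(f"unknown trait: {trait}")
    return (
        f"Rewrite the following debunking message so that it is maximally "
        f"persuasive for a reader who scores high in {trait}. "
        f"Keep every factual claim unchanged.\n\n"
        f"Message:\n{generic_debunk}"
    )

def build_evaluator_prompt(trait: str, message: str) -> str:
    """Prompt for the separate evaluator LLM: role-play the same
    personality profile and rate the message's persuasiveness."""
    if trait not in BIG_FIVE:
        raise ValueError(f"unknown trait: {trait}")
    return (
        f"You are a person who scores high in {trait}. "
        f"On a scale of 1 to 7, how persuasive do you find this "
        f"debunking message? Reply with a single integer.\n\n"
        f"Message:\n{message}"
    )
```

Each personalized message would then be sent to the evaluator model and the returned ratings compared against those for the generic version, with the comparison ideally repeated across several evaluator models.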