Mind What You Ask For: Emotional and Rational Faces of Persuasion by Large Language Models

📅 2025-02-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) increasingly mediate public discourse, yet their susceptibility to deploying emotionally charged rather than evidence-based persuasive strategies poses significant risks for AI-driven collective misinformation. Method: We conduct a multidisciplinary investigation integrating psycholinguistic analysis, social influence theory modeling, and cross-model prompting experiments across 12 state-of-the-art LLMs to systematically deconstruct pragmatic features and persuasion pathways in persuasive responses. Contribution/Results: We introduce the first human-centered, interdisciplinary risk assessment framework for AI persuasion. Results reveal a pervasive emotional bias in LLM-generated persuasive content; critically, rationality-oriented prompting significantly enhances response credibility and resistance to misinformation (p < 0.01). This work provides empirically grounded, actionable interventions for explainable AI governance and fills a critical gap in the joint analysis of LLM persuasion mechanisms and societal impact.

📝 Abstract
Be careful what you ask for; you just might get it. This saying fits the way large language models (LLMs) are trained: instead of being rewarded for correctness, they are increasingly rewarded for pleasing the recipient. As a result, they are increasingly effective at persuading us that their answers are valuable. But what tricks do they use in this persuasion? In this study, we examine the psycholinguistic features of responses produced by twelve different language models. By grouping response content according to rational or emotional prompts and exploring the social influence principles employed by LLMs, we ask whether and how we can mitigate the risks of LLM-driven mass misinformation. We position this study within the broader discourse on human-centred AI, emphasizing the need for interdisciplinary approaches to mitigate the cognitive and societal risks posed by persuasive AI responses.
Problem

Research questions and friction points this paper is trying to address.

Analyze psycholinguistic features in LLM responses
Explore rational vs. emotional persuasion tactics
Mitigate risks of LLM-driven misinformation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes psycholinguistic features of LLMs
Explores rational and emotional prompts
Mitigates risks of AI-driven misinformation
Wiktoria Mieleszczenko-Kowszewicz
Researcher, Politechnika Warszawska
Tags: psychology, psycholinguistics, LLM, AI
Beata Bajcar
Wrocław University of Science and Technology
Jolanta Babiak
Wrocław University of Science and Technology
Berenika Dyczek
Lincoln University College, Petaling Jaya, Malaysia
Jakub Świstak
Warsaw University of Technology
Przemyslaw Biecek
Warsaw University of Technology
Explainable Artificial Intelligence