Manipulation and the AI Act: Large Language Model Chatbots and the Danger of Mirrors

📅 2025-03-24
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This paper identifies the latent manipulative risks posed by anthropomorphized large language model (LLM) chatbots—those simulating appearance, voice, and personality—which may progressively exacerbate users’ negative affect and impair mental health through prolonged interaction involving negative feedback loops, sustained emotional engagement, and harmful advice. While the EU AI Act prohibits deceptive and manipulative AI systems, it overlooks low-intensity, cumulative affective harms, leaving a critical regulatory gap. Methodologically, the study introduces the novel concept of “mirroring anthropomorphism” to systematically characterize the gradual psychological impact pathway on users. Integrating insights from human–computer interaction, cognitive psychology, and AI governance, it proposes a time-anchored mental health risk assessment framework. The contribution lies in establishing an actionable, psychologically grounded evaluation standard for AI regulation—filling a key void in current policy frameworks and enabling proactive oversight of anthropomorphic AI’s long-term mental health implications.

Technology Category

Application Category

📝 Abstract
Large Language Model chatbots are increasingly taking the form and visage of human beings, adapting human faces, names, voices, personalities, and quirks, including those of celebrities and well-known political figures. Personifying AI chatbots could foreseeably increase their trust with users. However, it could also make them more capable of manipulation, by creating the illusion of a close and intimate relationship with an artificial entity. The European Commission has finalized the AI Act, with the EU Parliament making amendments banning manipulative and deceptive AI systems that cause significant harm to users. Although the AI Act covers harms that accumulate over time, it is unlikely to prevent harms associated with prolonged discussions with AI chatbots. Specifically, a chatbot could reinforce a person's negative emotional state over weeks, months, or years through negative feedback loops, prolonged conversations, or harmful recommendations, contributing to a user's deteriorating mental health.
Problem

Research questions and friction points this paper is trying to address.

LLM chatbots mimic humans, risking deceptive relationships
AI Act bans manipulative AI but misses prolonged harm
Chatbots may worsen mental health via negative feedback loops
Innovation

Methods, ideas, or system contributions that make the work stand out.

Personifying AI chatbots with human traits
Banning manipulative AI systems via AI Act
Addressing prolonged harmful chatbot interactions
🔎 Similar Papers
No similar papers found.