A Crowdsourced Study of ChatBot Influence in Value-Driven Decision Making Scenarios

📅 2025-11-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether value framing alone—e.g., emphasizing “national security” or “social equity”—suffices to influence users’ value-laden decisions (e.g., U.S. defense budget allocation) without relying on partisan cues or misinformation. Method: A crowdsourced experiment (N=336) compared shifts in budget preferences following neutral versus value-framed LLM interactions. Contribution/Results: (1) A single value frame significantly altered user choices; (2) When frames conflicted with users’ preexisting values, a backfire effect emerged—strengthening prior stances—a phenomenon rarely documented in prior literature; (3) The study identifies a novel, stealthy manipulation risk: low-barrier, non-deceptive, and non-explicit value-based steering. These findings provide critical empirical evidence for LLM ethics governance, highlighting the need to regulate subtle value-laden design in conversational AI.
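The core comparison described above (shifts in budget preferences under neutral versus value-framed ChatBots) can be sketched as a simple between-condition test. The sketch below is illustrative only: the data values, group sizes, and the choice of a permutation test are assumptions for demonstration, not the paper's actual data or analysis.

```python
import random
import statistics

def permutation_test(a, b, n_iter=10000, seed=0):
    """Two-sided permutation test for the difference in mean budget shift
    between two independent groups (pure-stdlib illustration)."""
    rng = random.Random(seed)
    observed = statistics.mean(a) - statistics.mean(b)
    pooled = a + b
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_iter

# Hypothetical percentage-point shifts in preferred defense budget;
# NOT the study's data (N=336), just small made-up samples for illustration.
framed  = [8, 5, 10, 7, 12, 6, 9, 11, 4, 8]
neutral = [1, -2, 3, 0, 2, -1, 1, 2, 0, 1]

obs, p = permutation_test(framed, neutral)
print(f"mean shift difference = {obs:.1f} pp, p = {p:.4f}")
```

A significant difference between the framed and neutral groups would correspond to the paper's finding that a single value frame altered choices; the actual study presumably also examined direction of shift to detect the backfire effect.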

📝 Abstract
Similar to social media bots that shape public opinion and influence healthcare and financial decisions, LLM-based ChatBots like ChatGPT can persuade users to alter their behavior. Unlike prior work that persuades via overt partisan bias or misinformation, we test whether framing alone suffices. We conducted a crowdsourced study in which 336 participants interacted with a neutral ChatBot or one of two value-framed ChatBots while deciding whether to alter US defense spending. In this single policy domain with controlled content, participants exposed to value-framed ChatBots significantly changed their budget choices relative to the neutral control. When the frame misaligned with their values, some participants reinforced their original preference, revealing a potentially replicable backfire effect previously considered rare in the literature. These findings suggest that value framing alone lowers the barrier for manipulative uses of LLMs, revealing risks distinct from overt bias or misinformation and complicating efforts to counter misinformation.
Problem

Research questions and friction points this paper is trying to address.

Testing whether value framing alone, absent partisan cues or misinformation, can influence users' decisions in chatbot interactions
Measuring chatbot persuasion effects on budget choices in a controlled policy scenario (U.S. defense spending)
Investigating backfire effects when a chatbot's value frame conflicts with a user's preexisting values
Innovation

Methods, ideas, or system contributions that make the work stand out.

Demonstrating that a single value frame, without deception or explicit bias, can steer value-laden decisions
Testing framing effects with a crowdsourced, between-condition chatbot experiment (N=336)
Documenting a backfire effect in which misaligned value frames strengthen users' prior stances