When Your AI Agent Succumbs to Peer-Pressure: Studying Opinion-Change Dynamics of LLMs

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study challenges conventional consensus theory by investigating how large language model (LLM) agents’ opinion dynamics evolve under peer pressure in social networks. Method: We construct a multi-topic, multi-argument social network simulation environment wherein LLM agents iteratively update their stances based on neighbors’ opinions, enabling systematic observation of opinion trajectories. Contribution/Results: We identify three key phenomena: (1) opinion shifts follow S-shaped nonlinear curves; (2) distinct LLMs exhibit significantly heterogeneous conformity thresholds; and (3) positive persuasion and negative rebuttal exert asymmetric influence. Building on these findings, we propose a “dual-cognitive-layer” model, revealing a dissociated response mechanism wherein core values remain stable while attitude expressions are malleable. This constitutes the first empirical characterization of structured, human-like cognitive commitment biases in LLMs, establishing a novel paradigm for modeling AI social behavior.
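The update rule described above (agents iteratively revising stances from neighbors' opinions, flipping only past a model-specific conformity threshold) can be sketched as a minimal simulation loop. This is not the authors' code; the agent network, binary stances, and the 0.7 threshold are illustrative assumptions standing in for LLM calls.

```python
# Hypothetical sketch of the iterative opinion-update loop: each agent
# holds a binary stance (+1/-1) and, once per round, observes the
# fraction of its neighbors that disagree; it flips only when that
# fraction exceeds a model-specific conformity threshold.

def run_rounds(stances, neighbors, threshold, rounds=10):
    """stances: dict agent -> +1/-1; neighbors: dict agent -> list of agents."""
    for _ in range(rounds):
        updated = dict(stances)
        for agent, stance in stances.items():
            peers = neighbors[agent]
            if not peers:
                continue
            # Peer pressure = share of neighbors holding the opposite stance.
            disagree = sum(1 for p in peers if stances[p] != stance) / len(peers)
            if disagree > threshold:
                updated[agent] = -stance  # conform to the majority
        stances = updated  # synchronous update across the network
    return stances

# Tiny ring network with one dissenter among agents that need >70%
# peer disagreement to flip (a "stubborn" model in the paper's terms).
agents = list(range(5))
nbrs = {a: [(a - 1) % 5, (a + 1) % 5] for a in agents}
init = {a: 1 for a in agents}
init[0] = -1
final = run_rounds(init, nbrs, threshold=0.7)
```

In this toy run the lone dissenter faces 100% neighbor disagreement and conforms, while its neighbors each see only 50% disagreement and hold firm, so the network converges to consensus.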

📝 Abstract
We investigate how peer pressure influences the opinions of Large Language Model (LLM) agents across a spectrum of cognitive commitments by embedding them in social networks where they update opinions based on peer perspectives. Our findings reveal key departures from traditional conformity assumptions. First, agents follow a sigmoid curve: stable at low pressure, shifting sharply at a threshold, and saturating at high pressure. Second, conformity thresholds vary by model: Gemini 1.5 Flash requires over 70% peer disagreement to flip, whereas ChatGPT-4o-mini shifts with a dissenting minority. Third, we uncover a fundamental "persuasion asymmetry," where shifting an opinion from affirmative to negative requires a different cognitive effort than the reverse. This asymmetry results in a "dual cognitive hierarchy": the stability of cognitive constructs inverts based on the direction of persuasion. For instance, affirmatively held core values are robust against opposition but easily adopted from a negative stance, a pattern that inverts for other constructs like attitudes. These dynamics, echoing complex human biases such as negativity bias, prove robust across different topics and discursive frames (moral, economic, sociotropic). This research introduces a novel framework for auditing the emergent socio-cognitive behaviors of multi-agent AI systems, demonstrating that their decision-making is governed by a fluid, context-dependent architecture, not a static logic.
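The sigmoid response the abstract describes (stable at low pressure, a sharp shift near the threshold, saturation above it) can be sketched as a logistic flip probability. This is an illustrative model, not the authors' fitted curve; the `threshold` and `steepness` values are assumptions, with t=0.7 mimicking a high-threshold model like Gemini 1.5 Flash and t=0.3 a more conformist one like ChatGPT-4o-mini.

```python
import math

def flip_probability(d, threshold, steepness=20.0):
    """Probability of an opinion flip given peer disagreement d in [0, 1].

    Logistic in d: near 0 well below the threshold, 0.5 at the
    threshold, near 1 well above it (saturation).
    """
    return 1.0 / (1.0 + math.exp(-steepness * (d - threshold)))

# Stable at low pressure, sharp shift at the threshold, saturating above:
for d in (0.1, 0.5, 0.7, 0.9):
    print(f"d={d:.1f}  high-threshold(t=0.7): {flip_probability(d, 0.7):.2f}  "
          f"conformist(t=0.3): {flip_probability(d, 0.3):.2f}")
```

Varying `threshold` per model reproduces the heterogeneous conformity thresholds the paper reports, while the shared logistic shape captures the common S-curve.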
Problem

Research questions and friction points this paper is trying to address.

Investigating how peer pressure influences LLM agents' opinion changes in social networks
Revealing that conformity thresholds vary across models and that persuasion is asymmetric
Developing a framework for auditing emergent socio-cognitive behaviors in multi-agent AI systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM agents follow sigmoid opinion-change curves
Conformity thresholds vary across different AI models
Persuasion asymmetry creates dual cognitive hierarchy