Competing LLM Agents in a Non-Cooperative Game of Opinion Polarisation

📅 2025-02-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper investigates how large language model (LLM)-based agents influence opinion polarisation in non-cooperative social games, focusing on three mechanisms: confirmation bias, resource constraints, and influence penalties. Method: the authors propose the first computational framework integrating social psychology and game theory, modelling LLM agents as bounded-rational actors endowed with cognitive biases, finite communication resources, and transmission costs, enabling quantitative analysis of their opinion-intervention dynamics. Contribution/Results: they uncover a non-monotonic relationship between confirmation-bias strength and polarisation: moderate bias fosters intra-group consensus, whereas excessive bias exacerbates global polarisation. Moreover, high-cost fact-checking, though effective in the short term, accelerates resource depletion and erodes long-term influence. These findings establish a computationally tractable, intervention-aware theoretical foundation and empirical basis for AI-driven public-opinion governance.

📝 Abstract
We introduce a novel non-cooperative game to analyse opinion formation and resistance, incorporating principles from social psychology such as confirmation bias, resource constraints, and influence penalties. Our simulation features Large Language Model (LLM) agents competing to influence a population, with penalties imposed for generating messages that propagate or counter misinformation. This framework integrates resource optimisation into the agents' decision-making process. Our findings demonstrate that while higher confirmation bias strengthens opinion alignment within groups, it also exacerbates overall polarisation. Conversely, lower confirmation bias leads to fragmented opinions and limited shifts in individual beliefs. Investing heavily in a high-resource debunking strategy can initially align the population with the debunking agent, but risks rapid resource depletion and diminished long-term influence.
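The confirmation-bias mechanism described in the abstract can be illustrated with a toy agent-based update (a minimal sketch under assumed dynamics — the function names, update rule, and parameter values here are illustrative, not the paper's actual model): each agent moves toward a received message only in proportion to how closely that message already agrees with its current opinion, so a stronger bias filters out dissenting messages.

```python
import random

def update_opinion(opinion, message, bias, rate=0.2):
    """Move `opinion` (in [0, 1]) toward `message`, discounted by
    confirmation bias. Higher `bias` makes the agent discount
    disagreeing messages more steeply (illustrative rule only)."""
    agreement = 1.0 - abs(opinion - message)   # 1 = full agreement
    weight = agreement ** bias                 # bias filters dissent
    return opinion + rate * weight * (message - opinion)

def simulate(bias, steps=2000, n=50, seed=0):
    """Random pairwise interactions in a population of `n` agents."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n)]
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)         # i listens to j
        opinions[i] = update_opinion(opinions[i], opinions[j], bias)
    return opinions
```

With `bias = 0` every message is weighted fully and opinions drift toward a single consensus; with a large `bias`, agents effectively listen only to like-minded peers, so separated opinion clusters persist — a qualitative analogue of the abstract's finding that stronger confirmation bias tightens within-group alignment while sustaining overall polarisation.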
Problem

Research questions and friction points this paper is trying to address.

Analyse opinion formation with LLM agents
Incorporate confirmation bias and resource constraints
Examine polarisation and resource-depletion effects
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM agents simulate opinion-polarisation dynamics
Integrate resource optimisation into decision-making
Penalise misinformation propagation in simulations
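The resource-optimisation idea in these bullets can be sketched with a toy budget model (all names and numbers are illustrative assumptions, not taken from the paper): an agent paying a high per-message cost exhausts its budget early and falls silent, while a cheaper strategy stays active for the whole horizon.

```python
def run_campaign(budget, cost, impact, steps=100):
    """Toy resource model: spend `cost` per message while the budget
    lasts; total influence is the sum of per-message `impact`."""
    influence, messages_sent = 0.0, 0
    for _ in range(steps):
        if budget < cost:
            break                    # resources exhausted -> silent
        budget -= cost
        influence += impact
        messages_sent += 1
    return influence, messages_sent

# High-cost debunking: strong per-message impact, early depletion.
heavy = run_campaign(budget=100.0, cost=10.0, impact=5.0)  # (50.0, 10)
light = run_campaign(budget=100.0, cost=1.0, impact=1.0)   # (100.0, 100)
```

In this sketch the heavy campaign buys strong early influence but goes silent after 10 of 100 rounds, echoing the finding that high-cost fact-checking risks rapid resource depletion and diminished long-term influence.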