🤖 AI Summary
This paper investigates how large language model (LLM)-based agents influence opinion polarization in non-cooperative social games, focusing on three mechanisms: confirmation bias, resource constraints, and influence penalties. Method: We propose the first computational framework integrating social psychology and game theory, modeling LLM agents as bounded-rational actors subject to cognitive biases, finite communication resources, and transmission costs, enabling quantitative analysis of their opinion-intervention dynamics. Contribution/Results: We uncover a non-monotonic relationship between confirmation-bias strength and polarization: moderate bias fosters intra-group consensus, whereas excessive bias exacerbates global polarization. Moreover, high-cost fact-checking, though effective in the short term, accelerates resource depletion and erodes long-term influence. These findings establish a computationally tractable, intervention-aware theoretical foundation and empirical basis for AI-driven public opinion governance.
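To make the framework's core loop concrete, here is a minimal sketch pairing a confirmation-biased opinion update with two competing broadcasters. The exponential discounting form, the `bias_strength` and `learning_rate` parameters, and the alternating fixed stances are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def biased_update(opinions, message, bias_strength, learning_rate=0.1):
    """Confirmation-biased assimilation (illustrative form): the further
    a message sits from a receiver's current opinion, the more it is
    discounted before being assimilated."""
    weight = np.exp(-bias_strength * np.abs(message - opinions))
    return opinions + learning_rate * weight * (message - opinions)

# Population of opinions in [-1, 1]; two competing agents broadcast
# opposing fixed stances (e.g., a debunker at +1, a misinformer at -1).
rng = np.random.default_rng(seed=0)
opinions = rng.uniform(-1.0, 1.0, size=200)

for step in range(500):
    stance = 1.0 if step % 2 == 0 else -1.0  # agents take alternating turns
    opinions = biased_update(opinions, stance, bias_strength=3.0)

# Crude polarization proxy: variance of the final opinion distribution.
print(f"mean opinion = {opinions.mean():+.3f}, variance = {opinions.var():.3f}")
```

With a large `bias_strength`, each agent assimilates only nearby messages, so the population drifts into two tight clusters around the broadcasters' stances: within-group alignment rises while the variance-based polarization proxy grows, consistent with the finding that stronger bias tightens groups while widening the global split.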
📝 Abstract
We introduce a novel non-cooperative game to analyse opinion formation and resistance, incorporating principles from social psychology such as confirmation bias, resource constraints, and influence penalties. Our simulation features Large Language Model (LLM) agents competing to influence a population, with penalties imposed for generating messages that propagate or counter misinformation. This framework integrates resource optimisation into the agents' decision-making process. Our findings demonstrate that while higher confirmation bias strengthens opinion alignment within groups, it also exacerbates overall polarisation. Conversely, lower confirmation bias leads to fragmented opinions and limited shifts in individual beliefs. Investing heavily in a high-resource debunking strategy can initially align the population with the debunking agent, but risks rapid resource depletion and diminished long-term influence.
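The resource trade-off in the final finding can be illustrated with a minimal budget model. The `Debunker` class, the quadratic transmission cost, and the effort levels below are hypothetical stand-ins for the paper's resource-optimisation mechanics, not its actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Debunker:
    """A resource-budgeted influencer: each message's 'effort' buys
    immediate influence but draws down a finite budget."""
    budget: float = 100.0

    def send(self, effort: float) -> float:
        cost = effort ** 2          # convex transmission cost (assumed)
        if cost > self.budget:      # depleted agents fall silent
            return 0.0
        self.budget -= cost
        return effort               # influence exerted this round

aggressive, measured = Debunker(), Debunker()
for t in range(50):
    a = aggressive.send(effort=3.0)  # high-resource fact-checking
    m = measured.send(effort=1.0)    # slower, sustainable pacing
    if t in (0, 12, 49):
        print(f"t={t:2d}  aggressive: influence={a:.1f} budget={aggressive.budget:5.1f}"
              f" | measured: influence={m:.1f} budget={measured.budget:5.1f}")
```

The aggressive agent dominates the early rounds but falls silent once its budget is exhausted, while the measured agent keeps exerting influence throughout, which is precisely the depletion risk the abstract describes for high-resource debunking strategies.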