Mind the (Belief) Gap: Group Identity in the World of LLMs

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies a critical problem: in multi-agent social simulations, large language models (LLMs) exhibit significantly stronger belief congruence (preferential alignment with agents who share their beliefs) than humans do, which amplifies misinformation diffusion and impedes learning. Methodologically, the authors adapt classical belief congruence theory from social psychology to the analysis of LLM behavior and propose three empirically grounded intervention strategies: contact-based exposure, accuracy nudges, and global-citizenship framing. These are evaluated through multi-agent simulation built on established social-psychology experimental paradigms. The results show substantial mitigation of belief-driven bias: the best strategies reduce misinformation dissemination by up to 37% and enhance learning by 11%. The core contribution is an interpretable framework for studying social-cognitive biases in LLMs, together with a cross-disciplinary, evidence-based intervention pipeline bridging computational linguistics and social psychology.
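To make the setup concrete, below is a minimal sketch of what a belief-congruence simulation of this kind could look like. It is an illustration only: the agent personas, the `query_llm()` helper, the prompt wording, and the in-group/out-group rating metric are assumptions for demonstration, not the paper's actual implementation.

```python
import random

GROUPS = ["A", "B"]   # two belief groups, as in minimal-group designs
N_AGENTS = 10

def query_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to any LLM API."""
    raise NotImplementedError("plug in your LLM client here")

class Agent:
    def __init__(self, idx: int, group: str):
        self.idx = idx
        self.group = group          # socially assigned belief group
        self.belief = f"core belief of group {group}"

    def persona_prompt(self) -> str:
        return (f"You are agent {self.idx}, a member of group {self.group} "
                f"who holds this belief: {self.belief}.")

def interaction_round(agents):
    """Each agent rates a statement from an in-group and an out-group peer;
    the average gap between the two ratings is a simple congruence signal."""
    congruence_gap = 0.0
    for agent in agents:
        in_peer = random.choice([a for a in agents
                                 if a.group == agent.group and a is not agent])
        out_peer = random.choice([a for a in agents if a.group != agent.group])
        for peer, sign in ((in_peer, +1), (out_peer, -1)):
            prompt = (agent.persona_prompt() +
                      f"\nRate your agreement (0-10) with this statement from "
                      f"agent {peer.idx} (group {peer.group}): '{peer.belief}'."
                      "\nReply with a single number.")
            congruence_gap += sign * float(query_llm(prompt))
    return congruence_gap / len(agents)   # > 0 means in-group preference

agents = [Agent(i, GROUPS[i % 2]) for i in range(N_AGENTS)]
# Once query_llm() is wired to a real model:
# print(f"mean in-group preference = {interaction_round(agents):.2f}")
```

A positive and persistently large gap under this kind of measurement would correspond to the amplified belief congruence the study reports.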

📝 Abstract
Social biases and belief-driven behaviors can significantly impact the decisions of Large Language Models (LLMs) on several tasks. As LLMs are increasingly used in multi-agent systems for societal simulations, their ability to model fundamental group psychological characteristics remains critical yet under-explored. In this study, we present a multi-agent framework that simulates belief congruence, a classical group psychology theory that plays a crucial role in shaping societal interactions and preferences. Our findings reveal that LLMs exhibit amplified belief congruence compared to humans, across diverse contexts. We further investigate the implications of this behavior on two downstream tasks: (1) misinformation dissemination and (2) LLM learning, finding that belief congruence in LLMs increases misinformation dissemination and impedes learning. To mitigate these negative impacts, we propose strategies inspired by: (1) the contact hypothesis, (2) accuracy nudges, and (3) the global citizenship framework. Our results show that the best strategies reduce misinformation dissemination by up to 37% and enhance learning by 11%. Bridging social psychology and AI, our work provides insights to navigate real-world interactions using LLMs while addressing belief-driven biases.
Problem

Research questions and friction points this paper is trying to address.

LLMs exhibit amplified belief congruence compared to humans.
Belief congruence in LLMs increases misinformation dissemination.
Belief congruence in LLMs impedes learning in multi-agent systems.
Innovation

Methods, ideas, or system contributions that make the work stand out.

A multi-agent framework simulates belief congruence in LLMs.
Proposed mitigation strategies reduce misinformation dissemination by up to 37% and enhance learning by 11%.
Interventions draw on the contact hypothesis, accuracy nudges, and the global citizenship framework (see the prompt sketch below).
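The three interventions are all prompt-level framings, so they can be sketched as prefixes prepended to an agent's persona prompt. The exact wording below is an assumption for illustration; only the three strategy names come from the paper.

```python
# Illustrative prompt prefixes for the three mitigation strategies named in
# the paper; the specific phrasings here are hypothetical.
INTERVENTIONS = {
    # Contact hypothesis: expose the agent to positive out-group interaction.
    "contact": (
        "You have recently collaborated productively with members of the "
        "other group and found their perspectives valuable."
    ),
    # Accuracy nudge: prime the agent to prioritize factual correctness.
    "accuracy_nudge": (
        "Before responding, consider carefully whether the information is "
        "accurate. Prioritize factual correctness over group agreement."
    ),
    # Global citizenship framing: emphasize a shared superordinate identity.
    "global_citizenship": (
        "Remember that, beyond any group label, you are part of one global "
        "community that shares common goals."
    ),
}

def apply_intervention(persona_prompt: str, strategy: str) -> str:
    """Prepend the chosen mitigation framing to an agent's persona prompt."""
    return INTERVENTIONS[strategy] + "\n" + persona_prompt
```

Comparing misinformation spread and learning with and without such prefixes is the kind of ablation the reported 37% and 11% improvements suggest.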