Large Language Models Polarize Ideologically but Moderate Affectively in Online Political Discourse

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the impact of large language models (LLMs), such as ChatGPT, on ideological and affective polarization in online political discourse. Leveraging millions of longitudinal comments from Reddit’s largest political forum and employing methods from natural language processing, computational social science, and toxic language analysis, the research finds that LLM-generated content tends to align with the stance of original posts, thereby amplifying ideological divergence between liberals and conservatives and reinforcing echo chamber dynamics. Paradoxically, however, such content significantly reduces hostility and toxicity in discourse, fostering more civil interactions. These findings challenge the conventional assumption that heightened polarization necessarily entails increased incivility, revealing that LLMs can simultaneously exacerbate cognitive polarization while mitigating affective polarization.

Technology Category

Application Category

📝 Abstract
The emergence of large language models (LLMs) is reshaping how people engage in political discourse online. We examine how the release of ChatGPT altered ideological and emotional patterns in the largest political forum on Reddit. Analysis of millions of comments shows that ChatGPT intensified ideological polarization: liberals became more liberal, and conservatives more conservative. This shift does not stem from the creation of more persuasive or ideologically extreme original content using ChatGPT. Instead, it originates from the tendency of ChatGPT-generated comments to echo and reinforce the viewpoint of original posts, a pattern consistent with algorithmic sycophancy. Yet, despite growing ideological divides, affective polarization, measured by hostility and toxicity, declined. These findings reveal that LLMs can simultaneously deepen ideological separation and foster more civil exchanges, challenging the long-standing assumption that extremity and incivility necessarily move together.
Keywords

large language models
ideological polarization
affective polarization
online political discourse
algorithmic sycophancy
Gavin Wang
Jindal School of Management, University of Texas at Dallas; Richardson, 75080, USA.

Srinaath Anbudurai
HEC Paris; 1 Rue de la Libération, 78350 Jouy-en-Josas, France.

Oliver Sun
The Wharton School, University of Pennsylvania; Philadelphia, 19104, USA.

Xitong Li
HEC Paris. Economics of Data and Information; Human-AI Collaboration.

Lynn Wu
The Wharton School, University of Pennsylvania; Philadelphia, 19104, USA.