🤖 AI Summary
This study investigates the impact of large language models (LLMs), such as ChatGPT, on ideological and affective polarization in online political discourse. Drawing on millions of longitudinal comments from Reddit’s largest political forum and employing methods from natural language processing, computational social science, and toxic language analysis, the research finds that LLM-generated content tends to align with the stance of original posts, amplifying ideological divergence between liberals and conservatives and reinforcing echo chamber dynamics. Paradoxically, such content also significantly reduces hostility and toxicity, fostering more civil interactions. These findings challenge the conventional assumption that heightened polarization necessarily entails increased incivility, revealing that LLMs can exacerbate ideological polarization while simultaneously mitigating affective polarization.
📝 Abstract
The emergence of large language models (LLMs) is reshaping how people engage in political discourse online. We examine how the release of ChatGPT altered ideological and emotional patterns in the largest political forum on Reddit. Analysis of millions of comments shows that ChatGPT intensified ideological polarization: liberals became more liberal, and conservatives more conservative. This shift does not stem from the creation of more persuasive or ideologically extreme original content using ChatGPT. Instead, it originates from the tendency of ChatGPT-generated comments to echo and reinforce the viewpoint of original posts, a pattern consistent with algorithmic sycophancy. Yet, despite growing ideological divides, affective polarization, measured by hostility and toxicity, declined. These findings reveal that LLMs can simultaneously deepen ideological separation and foster more civil exchanges, challenging the long-standing assumption that extremity and incivility necessarily move together.