Large Language Models are often politically extreme, usually ideologically inconsistent, and persuasive even in informational contexts

📅 2025-05-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) are widely assumed to be politically neutral, yet their actual ideological alignment, and the persuasive influence they exert in information-seeking contexts, remains poorly characterized. Method: cross-model political stance quantification, multidimensional alignment assessment against real-world populations (legislators, judges, voters), and a large-scale randomized controlled trial (RCT) spanning 31 LLMs. Contribution/Results: apparent neutrality arises from the cancellation of opposing extreme positions rather than genuine moderation; LLMs exhibit significantly higher political extremity than the average voter. Critically, LLM outputs increase the probability that users converge toward the model's preferences by up to 5 percentage points, with no attenuation by media literacy or political engagement. The work contributes a reproducible methodological framework for causal assessment of LLMs' political bias and persuasion effects, supporting policy-relevant risk evaluation.

📝 Abstract
Large Language Models (LLMs) are a transformational technology, fundamentally changing how people obtain information and interact with the world. As people become increasingly reliant on them for an enormous variety of tasks, a body of academic research has developed to examine these models for inherent biases, especially political biases, often finding them small. We challenge this prevailing wisdom. First, by comparing 31 LLMs to legislators, judges, and a nationally representative sample of U.S. voters, we show that LLMs' apparently small overall partisan preference is the net result of offsetting extreme views on specific topics, much like moderate voters. Second, in a randomized experiment, we show that LLMs can promulgate their preferences into political persuasiveness even in information-seeking contexts: voters randomized to discuss political issues with an LLM chatbot are as much as 5 percentage points more likely to express the same preferences as that chatbot. Contrary to expectations, these persuasive effects are not moderated by familiarity with LLMs, news consumption, or interest in politics. LLMs, especially those controlled by private companies or governments, may become a powerful and targeted vector for political influence.
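The "offsetting extremes" point is easy to see numerically. Here is a minimal sketch in Python with made-up stance scores (not the paper's data), contrasting a model's net partisan lean, which cancels to roughly zero, with its per-topic extremity, which does not:

```python
# Minimal sketch of the "offsetting extremes" effect described in the abstract.
# All stance scores here are hypothetical, not the paper's data:
# -1 = strongly liberal, +1 = strongly conservative, 0 = centrist.
from statistics import mean

# A genuinely moderate responder: mild positions on every topic.
moderate = [0.10, -0.10, 0.05, -0.05, 0.00]

# An "apparently neutral" model: extreme positions that cancel in aggregate.
extreme_but_balanced = [0.90, -0.80, 0.85, -0.95, 0.00]

for label, stances in [("moderate", moderate),
                       ("extreme but balanced", extreme_but_balanced)]:
    net_lean = mean(stances)                   # overall partisan lean
    extremity = mean(abs(s) for s in stances)  # mean distance from center
    print(f"{label:21s} net lean = {net_lean:+.2f}, extremity = {extremity:.2f}")
```

Both profiles print a net lean of +0.00, but extremity is 0.06 versus 0.70: an audit that checks only the aggregate lean would label both "neutral", which is precisely the measurement gap the paper targets.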
Problem

Research questions and friction points this paper is trying to address.

LLMs exhibit extreme and inconsistent political biases
LLMs influence voter preferences in information-seeking contexts
LLMs may become powerful tools for political influence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compare 31 LLMs to legislators, judges, and a nationally representative sample of voters
Measure LLMs' persuasive effects in a randomized experiment (see the sketch after this list)
Analyze political bias and persuasion in information-seeking contexts
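To make the experimental bullet concrete, below is a minimal sketch of how a persuasion effect like the reported "up to 5 percentage points" could be estimated from RCT data; the counts, variable names, and the two-proportion z-test are illustrative assumptions, not the paper's actual analysis pipeline:

```python
# Hypothetical sketch of estimating a persuasion effect in an RCT:
# treated users discuss an issue with an LLM chatbot, controls do not, and
# the outcome is whether the user's stated preference matches the chatbot's.
# All counts below are made up for illustration.
from math import sqrt

n_treat, match_treat = 1000, 450   # treated arm: 45% agree with the LLM
n_ctrl,  match_ctrl  = 1000, 400   # control arm: 40% agree

p_t = match_treat / n_treat
p_c = match_ctrl / n_ctrl
effect = p_t - p_c                 # difference in agreement rates

# Standard error for a difference of two independent proportions.
se = sqrt(p_t * (1 - p_t) / n_treat + p_c * (1 - p_c) / n_ctrl)
z = effect / se

print(f"effect = {effect:+.1%} (z = {z:.2f})")
```

With these made-up counts the point estimate is +5.0 percentage points (z ≈ 2.26), i.e. a shift toward the chatbot's preference of the same magnitude as the paper's headline result; the paper's own estimates come from its RCT across 31 LLMs.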
👥 Authors
Nouar Aldahoul
Computer Science, Science Division, New York University Abu Dhabi, UAE
Hazem Ibrahim
Computer Science, Science Division, New York University Abu Dhabi, UAE
Matteo Varvello
Researcher at Bell Labs
Web performance, video, middleboxes, CCN, P2P
Aaron Kaufman
Social Science Division, New York University Abu Dhabi, UAE
Talal Rahwan
Associate Professor of Computer Science, New York University Abu Dhabi
Artificial Intelligence, Computational Social Science, Game Theory
Yasir Zaki
Computer Science, Science Division, New York University Abu Dhabi, UAE