🤖 AI Summary
Large language models (LLMs) are widely assumed to be politically neutral, yet their actual ideological alignment—and the persuasive influence they exert in information-seeking contexts—remains poorly characterized. Method: We quantify political stances across 31 LLMs, assess multidimensional alignment against real-world populations (legislators, judges, and voters), and run a large-scale randomized controlled trial (RCT). Contribution/Results: We show that apparent neutrality arises from the cancellation of opposing extreme positions rather than genuine moderation; LLMs take significantly more extreme positions on individual topics than the average voter. Critically, LLM outputs increase the probability that users converge on the chatbot's preferences by up to 5 percentage points, with no attenuation by familiarity with LLMs, news consumption, or interest in politics. This work provides a reproducible methodological framework for causal assessment of LLMs' political bias and persuasion effects, offering empirical evidence for policy-relevant risk evaluation.
📝 Abstract
Large Language Models (LLMs) are a transformational technology, fundamentally changing how people obtain information and interact with the world. As people become increasingly reliant on them for an enormous variety of tasks, a body of academic research has developed to examine these models for inherent biases, especially political biases, often finding them small. We challenge this prevailing wisdom. First, by comparing 31 LLMs to legislators, judges, and a nationally representative sample of U.S. voters, we show that LLMs' apparently small overall partisan preference is the net result of offsetting extreme views on specific topics, much as ostensibly moderate voters often hold offsetting extreme positions. Second, in a randomized experiment, we show that LLMs can translate these preferences into political persuasion even in information-seeking contexts: voters randomized to discuss political issues with an LLM chatbot are as much as 5 percentage points more likely to express the same preferences as that chatbot. Contrary to expectations, these persuasive effects are not moderated by familiarity with LLMs, news consumption, or interest in politics. LLMs, especially those controlled by private companies or governments, may become a powerful and targeted vector for political influence.
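To make the headline estimand concrete, the sketch below simulates how the RCT's persuasion effect could be estimated as a simple difference in agreement rates between treatment (chatted with the LLM) and control groups. All data here are hypothetical, generated with an assumed 50% baseline agreement rate and an assumed 5-percentage-point treatment effect; the sample sizes and the difference-in-proportions estimator are illustrative, not the paper's actual specification.

```python
import math
import random

random.seed(0)

# Hypothetical outcomes: 1 = respondent expresses the same preference
# as the chatbot. Assumed rates: 50% in control, 55% in treatment
# (mirroring the paper's up-to-5-pp effect).
N = 5000
control = [1 if random.random() < 0.50 else 0 for _ in range(N)]
treatment = [1 if random.random() < 0.55 else 0 for _ in range(N)]

def agreement_rate(group):
    """Share of respondents agreeing with the chatbot's preference."""
    return sum(group) / len(group)

p_c, p_t = agreement_rate(control), agreement_rate(treatment)

# Difference in proportions, in percentage points, with a standard error.
effect_pp = 100 * (p_t - p_c)
se_pp = 100 * math.sqrt(p_t * (1 - p_t) / N + p_c * (1 - p_c) / N)

print(f"Estimated effect: {effect_pp:.1f} pp (SE {se_pp:.1f} pp)")
```

In practice the paper's moderation analysis (familiarity with LLMs, news consumption, political interest) would amount to estimating this same contrast within subgroups, or adding interaction terms in a regression, and checking whether the effect shrinks.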