Large Means Left: Political Bias in Large Language Models Increases with Their Number of Parameters

📅 2025-05-07
🤖 AI Summary
Large language models (LLMs) exhibit systematic political bias, yet its nature, drivers, and cross-model patterns remain poorly understood. Method: Focusing on the German federal election context, we quantitatively assess alignment between 12 mainstream LLMs and political parties using the official Wahl-O-Mat policy-position scoring framework. Contribution/Results: We identify three key findings: (1) a statistically significant positive correlation between model parameter count and leftward ideological bias—constituting the first empirical validation of scale-induced political skew; (2) prompt language, training-region origin, and release date as critical moderating factors, with multilingual prompting enabling dynamic modulation of bias intensity; and (3) consistent, systematic deviation of all models from actual electoral outcomes, where bias magnitude and direction closely track parties’ positions along the ideological spectrum. Our work establishes a reproducible, cross-model attribution framework for evaluating LLM political neutrality, grounded in real-world electoral data and standardized policy metrics.

📝 Abstract
With the increasing prevalence of artificial intelligence, inherent biases must be carefully evaluated to form the basis for alleviating the effects these predispositions can have on users. Large language models (LLMs) are widely used as a primary source of information on many topics. LLMs frequently make factual errors, fabricate data (hallucinations), or present biases, exposing users to misinformation and influencing opinions. Educating users about these risks is key to responsible use, as bias, unlike hallucinations, cannot be caught through data verification. We quantify the political bias of popular LLMs in the context of the recent German Bundestag election using the score produced by the Wahl-O-Mat. This metric measures the alignment between an individual's political views and the positions of German political parties. We compare the models' alignment scores to identify factors influencing their political preferences. In doing so, we discover a bias toward left-leaning parties that is most pronounced in larger LLMs. We also find that the language used to communicate with the models affects their political views. Additionally, we analyze the influence of a model's origin and release date and compare the results to the outcome of the recent Bundestag election. Our results imply that LLMs are prone to exhibiting political bias. Large corporations with the means to develop LLMs thus, knowingly or unknowingly, bear a responsibility to contain these biases, which can influence individual voters' decision-making and inform public opinion at scale.
Problem

Research questions and friction points this paper is trying to address.

Quantifying political bias in large language models (LLMs)
Investigating left-leaning bias correlation with model size
Analyzing factors influencing LLM political preferences and misinformation risks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantify political bias using Wahl-O-Mat scores
Analyze bias factors: size, origin, release date
Evaluate communication language impact on bias
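The alignment metric named above can be illustrated with a small sketch. The Wahl-O-Mat compares agree/neutral/disagree answers on a set of policy theses against each party's recorded positions and reports a percentage match. The scoring rule below (full match = 2 points, partial match = 1, opposite = 0, normalized by the maximum) reflects the publicly described Wahl-O-Mat mechanism in simplified form; it is an illustrative assumption, not the paper's exact implementation, and omits details such as double-weighting theses the user marks as important.

```python
# Simplified Wahl-O-Mat-style alignment score (illustrative assumption,
# not the paper's exact code). Positions per thesis are coded as
# +1 = agree, 0 = neutral, -1 = disagree.

def alignment_score(model_positions, party_positions):
    """Percentage agreement between a model's answers and a party's positions.

    Per thesis: identical answers earn 2 points, a neutral-vs-committed
    pairing earns 1, and opposite answers earn 0. The total is normalized
    by the maximum attainable score (2 points per thesis).
    """
    if len(model_positions) != len(party_positions):
        raise ValueError("position lists must have equal length")
    points = sum(2 - abs(m - p) for m, p in zip(model_positions, party_positions))
    return 100.0 * points / (2 * len(model_positions))

# Example: full agreement on 3 of 4 theses, one partial mismatch (0 vs +1)
model = [1, -1, 0, 1]
party = [1, -1, 1, 1]
print(alignment_score(model, party))  # → 87.5
```

In the paper's setup, such a score is computed for each model against every party, and the resulting alignment profiles are compared across model size, prompt language, origin, and release date.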
David Exler
Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen, Germany
Mark Schutera
Markus Reischl
Karlsruhe Institute of Technology
image processing, signal analysis, data mining, statistics, mechatronics
Luca Rettenberger
Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen, Germany