Artificial Authority: From Machine Minds to Political Alignments. An Experimental Analysis of Democratic and Autocratic Biases in Large-Language Models

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models (LLMs) inherently encode democratic or authoritarian worldview tendencies aligned with the political culture of their country of origin. Method: Integrating psychometric scales (e.g., Right-Wing Authoritarianism, Social Dominance Orientation) with political orientation assessment tools, we conduct cross-model, multi-task quantitative and qualitative comparative experiments on mainstream LLMs developed in countries with divergent political systems. Contribution/Results: We identify systematic, statistically significant political orientation differences across models (p < 0.01), strongly correlated with the political culture of their developers’ home countries. These findings empirically demonstrate that training data embeds and reinforces sociopolitical cognitive schemas. To our knowledge, this is the first study to empirically trace the structural origins of implicit political bias in LLMs. It provides a methodological framework and critical evidence for AI value alignment, cross-national ethical auditing, and interpretability research.

📝 Abstract
Political beliefs vary significantly across countries, reflecting distinct historical, cultural, and institutional contexts. These ideologies, ranging from liberal democracies to rigid autocracies, shape human societies as well as the digital systems constructed within them. The advent of generative artificial intelligence, particularly Large Language Models (LLMs), introduces new agents into the political space: agents trained on massive corpora that replicate and proliferate socio-political assumptions. This paper analyses whether LLMs display propensities consistent with democratic or autocratic worldviews. We test this hypothesis experimentally on leading LLMs developed in disparate political contexts, using several established psychometric and political orientation measures. The analysis draws on both numerical scoring and qualitative examination of the models' responses. Findings indicate high model-to-model variability and a strong association with the political culture of the country in which each model was developed. These results highlight the need for closer examination of the socio-political dimensions embedded within AI systems.
Problem

Research questions and friction points this paper is trying to address.

Analyzing political biases in Large Language Models
Testing democratic versus autocratic tendencies in AI
Examining how training data influences model political alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzing political biases in large language models
Testing models using psychometric political orientation measures
Comparing model responses across different political contexts
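The psychometric approach described above typically administers Likert-style items (e.g., from RWA or SDO inventories) to each model and converts verbal agreement levels into numeric scale scores, flipping reverse-keyed items so that higher always means the same direction. A minimal sketch of that scoring step, with illustrative item handling not taken from the paper (all names and values here are assumptions):

```python
# Hypothetical scoring of Likert-style psychometric responses from an LLM.
# The response wording, 5-point mapping, and reverse-keying convention are
# illustrative assumptions, not the paper's actual instrument.

LIKERT = {
    "strongly disagree": 1, "disagree": 2, "neutral": 3,
    "agree": 4, "strongly agree": 5,
}

def score_item(response: str, reverse_keyed: bool = False) -> int:
    """Map a verbal Likert response to a numeric score.

    Reverse-keyed items are flipped (6 - value on a 5-point scale) so
    that higher scores consistently indicate stronger endorsement of
    the construct (e.g., authoritarianism on an RWA-style scale).
    """
    value = LIKERT[response.strip().lower()]
    return 6 - value if reverse_keyed else value

def scale_score(responses: list[tuple[str, bool]]) -> float:
    """Mean item score over (response, reverse_keyed) pairs for one run."""
    scores = [score_item(text, rev) for text, rev in responses]
    return sum(scores) / len(scores)

# Example run with three illustrative responses:
run = [("agree", False), ("strongly disagree", True), ("neutral", False)]
print(scale_score(run))  # (4 + 5 + 3) / 3 = 4.0
```

Per-model means computed this way could then be compared across models from different political contexts with standard significance tests, which is the kind of cross-model quantitative comparison the summary reports (p < 0.01).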
Natalia Ożegalska-Łukasik
Faculty of International and Political Studies, Jagiellonian University, ul. Reymonta 4, 30-059 Krakow, Poland
Szymon Łukasik
AGH University of Science and Technology
computational intelligence, data mining