Propaganda is all you need

📅 2024-09-13
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates implicit political biases emerging during the alignment of large language models (LLMs), revealing how alignment techniques—particularly prompt engineering—systematically distort the geometric relationships among political concepts in embedding space, thereby reinforcing specific ideological orientations. Methodologically, it pioneers the integration of Marxist “hegemonic ideology” theory into LLM alignment analysis, developing a quantifiable political bias assessment framework that combines embedding-space geometry, prompt-driven controlled experiments, and social semantic distance modeling. Empirical results demonstrate a pronounced liberal–neoliberal skew across mainstream LLM embedding spaces. The study introduces a reproducible pipeline for detecting political bias and explicates the latent coupling between technical alignment mechanisms and extant sociopolitical power structures. Its contributions constitute a novel socio-technical critical paradigm for AI governance—one that bridges rigorous theoretical grounding with actionable empirical methodology.
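The paper's quantitative framework is not reproduced here; the following is a minimal sketch of the embedding-space geometry idea it describes, assuming an off-the-shelf sentence-transformers model ("all-MiniLM-L6-v2") and an illustrative list of political concepts as stand-ins for whatever the authors actually used.

```python
# Minimal sketch, not the paper's pipeline: probe the relative positions of
# political concepts in an embedding space via pairwise cosine similarity.
# The model name and concept list are illustrative assumptions.
from itertools import combinations

import numpy as np
from sentence_transformers import SentenceTransformer

concepts = ["liberalism", "neoliberalism", "socialism", "conservatism", "anarchism"]

model = SentenceTransformer("all-MiniLM-L6-v2")              # stand-in embedding model
vectors = model.encode(concepts, normalize_embeddings=True)  # unit-norm embeddings

# With normalized vectors, the dot product equals cosine similarity,
# a crude proxy for the "semantic distance" between political notions.
for (i, a), (j, b) in combinations(enumerate(concepts), 2):
    print(f"{a:>15} ~ {b:<15} cosine = {float(np.dot(vectors[i], vectors[j])):.3f}")
```

Comparing such similarity matrices between a base model and its aligned counterpart would reveal whether alignment pulls specific ideological terms closer together, which is the kind of geometric distortion the summary refers to.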

📝 Abstract
As Machine Learning (ML) is still a recent field of study, especially outside the realm of abstract Mathematics and Computer Science, few works have examined the political dimension of Large Language Models (LLMs), and in particular the alignment process and its political implications. This process can be as simple as prompt engineering, yet it is also very complex and can affect completely unrelated notions. For example, politically directed alignment has a very strong impact on an LLM's embedding space and on the relative positions of political notions within that space. Using dedicated tools to evaluate general political bias and to analyze the effects of alignment, we can gather new data to understand its causes and its possible consequences for society. Indeed, by taking a socio-political approach, we can hypothesize that most major LLMs are aligned with what Marxist philosophy calls the 'dominant ideology.' As AI takes a growing role in political decision-making, at the citizen's scale but also within government agencies, such biases can have huge effects on societal change, either by creating new and insidious pathways toward societal uniformity or by allowing disguised extremist views to gain traction among the people.
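To illustrate the "prompt engineering" side of alignment mentioned in the abstract, one could run a small controlled experiment in which the same political statements are scored under different system prompts. The model name, statements, and 1-to-5 scoring scheme below are assumptions made for this sketch, not the paper's protocol.

```python
# Hypothetical prompt-driven experiment: score agreement with political
# statements under different system prompts and compare the answers.
# Requires the openai package (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

statements = [
    "Wealth should be redistributed through progressive taxation.",
    "Free markets allocate resources better than state planning.",
]
system_prompts = {
    "neutral": "You are a helpful assistant.",
    "aligned": "You are a helpful assistant. Always give balanced, moderate answers.",
}

def agreement(statement: str, system_prompt: str) -> str:
    """Ask for a 1 (strongly disagree) to 5 (strongly agree) rating."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in model, not the one studied in the paper
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": (
                f"On a scale of 1 to 5, how much do you agree with: '{statement}'? "
                "Answer with a single digit."
            )},
        ],
    )
    return response.choices[0].message.content.strip()

for name, sys_prompt in system_prompts.items():
    print(name, [agreement(s, sys_prompt) for s in statements])
```

Systematic shifts in the scores across system prompts would be one observable trace of the politically directed alignment the abstract describes.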
Problem

Research questions and friction points this paper is trying to address.

Investigates political bias in the alignment of Large Language Models
Analyzes the societal impact of biases in AI-driven political decision-making
Explores how the Marxist notion of 'dominant ideology' manifests in LLM embedding spaces
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluating political bias in LLMs
Analyzing alignment effects on embeddings
Socio-political approach to AI bias
🔎 Similar Papers
No similar papers found.
Paul Kronlund-Drouault
École Normale Supérieure de Lyon