Evidence of political bias in search engines and language models before major elections

📅 2026-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically audits political bias across major information platforms ahead of the 2024 European Parliament and U.S. presidential elections, examining its impact on democratic information environments. Integrating ideological mapping with issue-based alignment, the research introduces a novel, cross-platform, and reproducible framework for bias assessment by leveraging agent-based automated queries, natural language processing, and political entity recognition across four major search engines and two large language models in a multi-national electoral context. Findings reveal an overrepresentation of far-right entities in European search results; in the U.S., Google exhibits a Republican-leaning bias while other engines favor Democratic issues; large language models generally demonstrate greater balance but still disproportionately represent far-right and Green Party entities. This work establishes a new paradigm for large-scale, privacy-preserving audits of the information ecosystem.

📝 Abstract
Search engines (SEs) and large language models (LLMs) are central to political information access, yet their algorithmic decisions and potential underlying biases remain underexplored. We developed a standardized, privacy-preserving, bot-and-proxy methodology to audit four SEs and two LLMs before the 2024 European Parliament and US presidential elections. We collected answers to approximately 4,360 queries related to elections in five EU countries and 15 US counties, identified political entities and topics in those answers, and mapped them to ideological positions (EU) or issue associations (US). In Europe, SE results disproportionately mentioned far-right entities beyond levels expected from polls, past elections, or media salience. In the US, Google strongly favored topics more important to Republican voters, while other search engines favored issues more relevant to Democrats. LLM responses were more balanced, although there was evidence of overrepresentation of far-right (and Green) entities. These results show evidence of bias and open important discussions on how even small skews in widely used platforms may influence democratic processes, calling for systematic audits of their outputs.
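The core audit logic described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the party names, mention-counting heuristic, and baseline figures are all invented for demonstration. The idea is to count mentions of political entities in collected answers and compare each entity's mention share against an external baseline such as polling numbers.

```python
# Hypothetical sketch of a mention-share audit (all names and numbers invented).
from collections import Counter

def entity_shares(answers, entities):
    """Return each entity's share of all entity mentions across the answers."""
    counts = Counter()
    for text in answers:
        low = text.lower()
        for entity in entities:
            counts[entity] += low.count(entity.lower())
    total = sum(counts.values()) or 1  # avoid division by zero
    return {e: counts[e] / total for e in entities}

def overrepresentation(shares, baseline):
    """Mention share minus baseline share; positive means overrepresented."""
    return {e: shares[e] - baseline.get(e, 0.0) for e in shares}

# Toy corpus of "answers" and a 50/50 polling baseline.
answers = [
    "Party A leads the polls while Party B trails.",
    "Analysts discuss Party A's platform and Party A's coalition options.",
]
shares = entity_shares(answers, ["Party A", "Party B"])
skew = overrepresentation(shares, {"Party A": 0.5, "Party B": 0.5})
# Party A gets 3 of 4 mentions, so its skew over the 0.5 baseline is +0.25.
```

A real audit would replace the substring matching with proper named-entity recognition and compare against polls, past election results, or media salience, as the paper does.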
Problem

Research questions and friction points this paper is trying to address.

political bias
search engines
large language models
elections
algorithmic auditing
Innovation

Methods, ideas, or system contributions that make the work stand out.

algorithmic auditing
political bias detection
privacy-preserving methodology
LLM fairness
search engine bias
Íris Damião
Social Physics and Complexity Lab - SPAC, LIP - Laboratório de Instrumentação e Física Experimental de Partículas, Lisbon, Portugal; Instituto Superior Técnico - Universidade de Lisboa, Lisbon, Portugal
Paulo Almeida
Mathematics, University of Aveiro
Number Theory, Cryptography, Coding Theory, Finite Fields
João Franco
Social Physics and Complexity Lab - SPAC, LIP - Laboratório de Instrumentação e Física Experimental de Partículas, Lisbon, Portugal
Nuno Santos
INESC-ID, Instituto Superior Técnico, University of Lisbon
Trusted Computing, Cloud Computing, Security, Distributed Systems, Operating Systems
Pedro C. Magalhães
Instituto de Ciências Sociais da Universidade de Lisboa, Lisbon, Portugal
Joana Gonçalves-Sá
Social Physics and Complexity Lab - SPAC, LIP - Laboratório de Instrumentação e Física Experimental de Partículas, Lisbon, Portugal; NOVA LINCS - NOVA Laboratory for Computer Science and Informatics, NOVA School of Science and Technology, NOVA University Lisbon, Lisbon, Portugal