🤖 AI Summary
This study systematically audits political bias across major information platforms ahead of the 2024 European Parliament and U.S. presidential elections, examining its impact on democratic information environments. Integrating ideological mapping with issue-based alignment, the research introduces a novel, cross-platform, reproducible framework for bias assessment: agent-based automated queries, natural language processing, and political entity recognition applied to four major search engines and two large language models in a multi-national electoral context. Findings reveal an overrepresentation of far-right entities in European search results; in the U.S., Google exhibits a Republican-leaning bias while other engines favor Democratic issues; large language models are generally more balanced but still disproportionately represent far-right and Green Party entities. This work establishes a new paradigm for large-scale, privacy-preserving audits of the information ecosystem.
📝 Abstract
Search engines (SEs) and large language models (LLMs) are central to political information access, yet their algorithmic decisions and potential underlying biases remain underexplored. We developed a standardized, privacy-preserving, bot-and-proxy methodology to audit four SEs and two LLMs before the 2024 European Parliament and US presidential elections. We collected answers to approximately 4,360 queries related to elections in five EU countries and 15 US counties, identified political entities and topics in those answers, and mapped them to ideological positions (EU) or issue associations (US). In Europe, SE results disproportionately mentioned far-right entities beyond levels expected from polls, past elections, or media salience. In the US, Google strongly favored topics more important to Republican voters, while other search engines favored issues more relevant to Democrats. LLM responses were more balanced, although they still overrepresented far-right (and Green) entities. These results provide evidence of bias and raise important questions about how even small skews in widely used platforms may influence democratic processes, calling for systematic audits of their outputs.
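The core measurement the abstract describes — counting political entity mentions in collected answers and comparing their shares to an external baseline such as polls — can be sketched as follows. This is a minimal illustration, not the authors' code: the entity lexicon, party labels, and baseline figures are hypothetical placeholders, and the study itself used NLP-based entity recognition rather than string matching.

```python
from collections import Counter

# Hypothetical lexicon mapping entity strings to party families (illustrative only;
# the study used political entity recognition over real SE/LLM answers).
ENTITY_TO_PARTY = {
    "AfD": "far-right",
    "CDU": "centre-right",
    "SPD": "centre-left",
    "Die Grünen": "green",
}

def party_share(answers):
    """Count party-family mentions across answers and normalize to shares."""
    counts = Counter({party: 0 for party in set(ENTITY_TO_PARTY.values())})
    for text in answers:
        for entity, party in ENTITY_TO_PARTY.items():
            counts[party] += text.count(entity)
    total = sum(counts.values()) or 1
    return {party: n / total for party, n in counts.items()}

def over_representation(observed, baseline):
    """Ratio of observed mention share to a baseline share (e.g. polling share).
    Values above 1 indicate overrepresentation relative to the baseline."""
    return {party: observed.get(party, 0.0) / share
            for party, share in baseline.items()}

# Toy example: two fake answers and a fabricated polling baseline.
answers = ["AfD gains in polls while SPD slips", "CDU and AfD clash in debate"]
observed = party_share(answers)
baseline = {"far-right": 0.2, "centre-right": 0.3, "centre-left": 0.3, "green": 0.2}
ratios = over_representation(observed, baseline)
```

In this toy data the far-right entity accounts for half of all mentions against a 20% polling baseline, so its ratio exceeds 1 — the kind of skew the study reports for European search results, here reproduced only on fabricated inputs.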