🤖 AI Summary
This study investigates geographically and politically heterogeneous public perceptions of AI risks to advance inclusive, responsible AI governance. Using a cross-lingual corpus of mainstream media coverage from 27 countries across six continents, we integrate multilingual topic modeling, NLP-driven annotation of risk categories, and quantitative measurement of media political orientation to conduct the first systematic comparative analysis of national AI risk narratives. Results reveal that societal risks constitute the highest global priority, followed by legal and rights-related risks. Critically, political orientation significantly modulates risk prioritization: left-leaning outlets emphasize social equity and labor displacement, whereas right-leaning outlets foreground content safety and sovereignty concerns. By moving beyond technocentric paradigms, this work provides empirically grounded, pluralistic insights essential for designing participatory, evidence-based AI governance frameworks.
📝 Abstract
Emerging AI technologies have the potential to drive economic growth and innovation, but they can also pose significant risks to society. To mitigate these risks, governments, companies, and researchers have contributed regulatory frameworks, risk assessment approaches, and safety benchmarks, but these can lack nuance when considered in global deployment contexts. One way to understand these nuances is to examine how the media reports on AI, as news media has a substantial influence on which negative impacts of AI are discussed in the public sphere and which impacts are deemed important. In this work, we analyze a broad and diverse sample of global news media spanning 27 countries across Asia, Africa, Europe, the Middle East, North America, and Oceania to gain valuable insights into the risks and harms of AI technologies as reported and prioritized by media outlets in different countries. This approach reveals a skewed prioritization of Societal Risks, followed by Legal & Rights-related Risks, Content Safety Risks, Cognitive Risks, Existential Risks, and Environmental Risks, as reflected in the prevalence of these risk categories in the news coverage of different nations. Furthermore, it highlights how the distribution of these concerns varies with the political bias of news outlets, underscoring the political nature of AI risk assessment processes and public opinion. By incorporating views from various regions and political orientations into the assessment of AI risks and harms, this work presents stakeholders, such as AI developers and policy makers, with insights into the AI risk categories prioritized in the public sphere. These insights may guide the development of more inclusive, safe, and responsible AI technologies that address diverse concerns and needs across the world.