Societal AI Research Has Become Less Interdisciplinary

📅 2025-06-10
🤖 AI Summary
This study investigates how sociotechnical issues, such as fairness, healthcare, and misinformation, are integrated into AI research, and examines the differential roles of interdisciplinary versus computer science (CS)-only teams in this integration.

Method: Leveraging over 100,000 arXiv AI papers published between 2014 and 2024, we employ an NLP-driven classifier to identify socially relevant content, complemented by bibliometric analysis of author affiliations and quantitative modeling of co-authorship networks.

Contribution/Results: We present the first empirical evidence that CS-only teams dominate sociotechnical AI research: their share of socially oriented publications has risen steadily, and the density of social content in their work now approaches that of interdisciplinary teams. These findings challenge the prevailing assumption that interdisciplinary collaboration is a prerequisite for embedding ethical values in AI. Instead, they reveal CS teams as increasingly central to AI safety and governance practice. Concurrently, the study reaffirms the indispensable role of humanities and social science scholars in value articulation, critical framing, and institutional design.

📝 Abstract
As artificial intelligence (AI) systems become deeply embedded in everyday life, calls to align AI development with ethical and societal values have intensified. Interdisciplinary collaboration is often championed as a key pathway for fostering such engagement. Yet it remains unclear whether interdisciplinary research teams are actually leading this shift in practice. This study analyzes over 100,000 AI-related papers published on arXiv between 2014 and 2024 to examine how ethical values and societal concerns are integrated into technical AI research. We develop a classifier to identify societal content and measure the extent to which research papers express these considerations. We find a striking shift: while interdisciplinary teams remain more likely to produce societally oriented research, computer science-only teams now account for a growing share of the field's overall societal output. These teams are increasingly integrating societal concerns into their papers and tackling a wide range of domains, from fairness and safety to healthcare and misinformation. These findings challenge common assumptions about the drivers of societal AI and raise important questions. First, what are the implications for emerging understandings of AI safety and governance if most societally oriented research is being undertaken by exclusively technical teams? Second, for scholars in the social sciences and humanities: in a technical field increasingly responsive to societal demands, what distinctive perspectives can we still offer to help shape the future of AI?
Problem

Research questions and friction points this paper is trying to address.

Examines declining interdisciplinarity in societal AI research
Assesses integration of ethical values in technical AI papers
Explores implications of technical teams dominating societal AI output
Innovation

Methods, ideas, or system contributions that make the work stand out.

Classifier identifies societal content in AI papers
Measures societal considerations in technical research
Analyzes interdisciplinary vs. CS-only team contributions
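The paper's classifier pipeline is not published here, but the idea of scoring abstracts for societal content can be sketched with a standard text-classification setup. This is a minimal illustration, not the authors' actual method: the training examples, labels, and `societal_score` helper below are all hypothetical, and a real system would be trained on a large annotated corpus.

```python
# Hypothetical sketch of a societal-content classifier (not the paper's
# actual pipeline): TF-IDF features + logistic regression over abstracts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled abstracts: 1 = societally oriented, 0 = purely technical.
abstracts = [
    "We study fairness and bias in automated hiring algorithms.",
    "We audit misinformation spread on large social platforms.",
    "Clinical decision support improves patient safety in healthcare.",
    "A faster matrix multiplication kernel for GPU accelerators.",
    "We prove convergence bounds for stochastic gradient descent.",
    "A new attention mechanism reduces transformer memory use.",
]
labels = [1, 1, 1, 0, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(abstracts, labels)

def societal_score(text: str) -> float:
    """Estimated probability that the text expresses societal considerations."""
    return float(clf.predict_proba([text])[0, 1])
```

With per-paper scores like this, one could then compare score distributions between interdisciplinary and CS-only author teams, which is the kind of comparison the study's bibliometric analysis performs.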
Dror K. Markus
Department of Political Science, University of Zurich

Fabrizio Gilardi
Professor of Political Science, University of Zurich
Political science, public policy, digital technology & politics

Daria Stetsenko
Department of Computational Linguistics, University of Zurich