What are AI researchers worried about?

📅 2026-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how well public and technology-company discourse on AI risks reflects the actual concerns of AI researchers. Through a large-scale survey of more than 4,000 AI researchers, combining structured responses with open-ended textual analysis, and a comparison with existing public opinion data, the research finds that only 3% of AI researchers prioritise existential risks, challenging prevailing narratives. The findings show strong convergence between researchers and the public on priorities for concrete sociotechnical risks such as algorithmic bias, malicious use, and labour market disruption. The authors argue that policy and public discourse should shift focus toward these tangible, actionable risks, offering empirical grounding for more effective AI governance.

📝 Abstract
As AI attracts vast investment and attention, there are competing concerns about the technology's opportunities and uncertainties that blend technical and social questions. The public debate, dominated by a few powerful voices, tends to highlight extreme promises and threats. We wanted to know whether public discussions or technology companies' priorities were representative of AI researchers' opinions. Our survey of more than 4,000 AI researchers is, we think, the largest conducted to date. It was designed to understand attitudes to a variety of issues and include some comparisons with public attitudes derived from existing surveys. Most previous surveys of AI researchers have asked them for predictions of passing a technological threshold or the probabilities of some catastrophic event. These surveys mask the uncertainty and diversity that normally characterises scientific research. Our hypothesis was that the opinions of AI researchers would be markedly different from those of members of the public. While there are areas of divergence, particularly in attitudes to the technology's potential benefits, our survey shows some surprising convergence between researchers' and publics' opinions, particularly in the assessment and prioritisation of risk. Responses to an open text question 'What one thing most worries you about AI?' reveal that only 3% of AI researchers prioritise existential risks, despite the prominence given to these risks in media and policy. AI technologies and AI researchers seem to be much more 'normal' than public representations suggest. Our survey results suggest the possibility for new forms of public dialogue on AI's harms, risks and opportunities. Rather than speculating on future potential risks, policymakers and AI researchers should collaborate on understanding and mitigating the range of sociotechnical risks that are already of clear public concern.

Problem

Research questions and friction points this paper is trying to address.

AI researchers
public perception
existential risk
sociotechnical risks
AI concerns
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI researcher survey
risk perception
public attitudes
existential risk
sociotechnical risks
Cian O'Donovan
Department of Science and Technology Studies, University College London, United Kingdom
Sarp Gurakan
Department of Science and Technology Studies, University College London, United Kingdom
Ananya Karanam
Department of Science and Technology Studies, University College London, United Kingdom
Xiaomeng Wu
NTT Corporation
Image Processing, Information Retrieval, Multimedia, Pattern Recognition
Jack Stilgoe
Professor, University College London
Science Policy, Science and Technology Studies