Thousands of AI Authors on the Future of AI

📅 2024-01-05
🏛️ arXiv.org
📈 Citations: 59
Influential: 3
🤖 AI Summary
This study addresses critical open questions regarding the pace of AI progress, the boundaries of advanced AI capabilities, and long-term existential risks. We conducted the largest prospective survey of AI researchers to date (N=2778), targeting authors from top-tier conferences and employing a structured questionnaire with aggregate forecasting methodology. Our analysis systematically estimates timelines for AI capability milestones, the feasibility of full occupational automation, and the probability and severity of superintelligent AI risks. Key findings include a significant acceleration in expert expectations: the median predicted year by which AI will surpass humans across all tasks has moved 13 years earlier relative to the 2022 survey, and respondents gave at least a 50% probability that AI will autonomously build websites, compose original music, and fine-tune large language models by 2028 or earlier. Over half of respondents rated six AI-related scenarios, including misinformation, authoritarian control, and inequality, as warranting "substantial" or "extreme" concern, and between 38% and 51% gave at least a 10% chance to outcomes as bad as human extinction, yielding a strong consensus that AI risk mitigation research warrants substantially higher priority.

📝 Abstract
In the largest survey of its kind, 2,778 researchers who had published in top-tier artificial intelligence (AI) venues gave predictions on the pace of AI progress and the nature and impacts of advanced AI systems. The aggregate forecasts give at least a 50% chance of AI systems achieving several milestones by 2028, including autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model. If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047. The latter estimate is 13 years earlier than that reached in a similar survey we conducted only one year earlier [Grace et al., 2022]. However, the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey). Most respondents expressed substantial uncertainty about the long-term value of AI progress: while 68.3% thought good outcomes from superhuman AI are more likely than bad, 48% of these net optimists gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 38% and 51% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction. More than half suggested that "substantial" or "extreme" concern is warranted about six different AI-related scenarios, including misinformation, authoritarian control, and inequality. There was disagreement about whether faster or slower AI progress would be better for the future of humanity. However, there was broad agreement that research aimed at minimizing potential risks from AI systems ought to be prioritized more.
Problem

Research questions and friction points this paper is trying to address.

Surveying AI researchers' predictions on AI progress milestones and timelines
Assessing probabilities of AI surpassing human capabilities across various tasks
Evaluating expert opinions on AI risks including extinction and societal impacts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale survey of AI researchers' predictions
Forecasting AI milestones and automation timelines
Assessing risks and priorities in AI development
Katja Grace
AI Impacts, Berkeley, California, United States
Harlan Stewart
AI Impacts, Berkeley, California, United States
J. F. Sandkühler
Department of Psychology, University of Bonn, Germany
Stephen Thomas
AI Impacts, Berkeley, California, United States
Benjamin Weinstein-Raun
Palisade Research, AI Impacts
Jan Brauner
Department of Computer Science, University of Oxford, United Kingdom