From Incidents to Insights: Patterns of Responsibility following AI Harms

📅 2025-05-07
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study investigates patterns of responsibility attribution and evolving societal expectations among multiple stakeholders (developers, deployers, victims, the public, and legislators) following AI incidents. Drawing on 962 incidents from the AI Incident Database (AIID) and 4,743 associated media reports, it employs a three-tier mixed-methods approach (quantitative statistical analysis, qualitative coding, and comparative case studies), yielding the first systematic account of non-technical patterns of responsibility attribution after AI incidents. Results reveal that the identifiability of responsible parties does not imply actual accountability; contextual factors, including actor identity, victim group characteristics, and incident contentiousness, significantly moderate attribution. Crucially, both contentious and non-contentious incidents convey salient institutional learning signals. The study demonstrates that the AIID's core value lies in mapping the dynamic interplay between societal adaptation and governance responses, thereby providing empirical grounding for designing robust AI accountability frameworks and resilience-oriented governance.

📝 Abstract
The AI Incident Database was inspired by aviation safety databases, which enable collective learning from failures to prevent future incidents. The database documents hundreds of AI failures, collected from the news and media. However, criticism highlights that the AIID's reliance on media reporting limits its utility for learning about implementation failures. In this paper, we accept that the AIID falls short in its original mission, but argue that by looking beyond technically-focused learning, the dataset can provide new, highly valuable insights: specifically, opportunities to learn about patterns between developers, deployers, victims, wider society, and law-makers that emerge after AI failures. Through a three-tier mixed-methods analysis of 962 incidents and 4,743 related reports from the AIID, we examine patterns across incidents, focusing on cases with public responses tagged in the database. We identify 'typical' incidents found in the AIID, from Tesla crashes to deepfake scams. Focusing on this interplay between relevant parties, we uncover patterns in accountability and social expectations of responsibility. We find that the presence of identifiable responsible parties does not necessarily lead to increased accountability. The likelihood of a response and what it amounts to depends highly on context, including who built the technology, who was harmed, and to what extent. Controversy-rich incidents provide valuable data about societal reactions, including insights into social expectations. Equally informative are cases where controversy is notably absent. This work shows that the AIID's value lies not just in preventing technical failures, but in documenting patterns of harms and of institutional response and social learning around AI incidents. These patterns offer crucial insights for understanding how society adapts to and governs emerging AI technologies.
Problem

Research questions and friction points this paper is trying to address.

Analyzing patterns of responsibility after AI failures
Exploring accountability gaps in AI incident responses
Documenting societal adaptation to AI-related harms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes AI Incident Database for failure analysis
Employs mixed-methods analysis on incident patterns
Focuses on accountability and social expectations
Isabel Richards
The Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK
Claire Benn
University of Cambridge
Ethics, Technology, Supererogation, AI, Virtual Reality
Miri Zilka
University of Cambridge
Trustworthy Machine Learning