Catastrophic Liability: Managing Systemic Risks in Frontier AI Development

📅 2025-05-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Increasing autonomy in frontier AI systems poses large-scale systemic risks to society, yet current AI development lacks transparency around safety measures, testing protocols, and governance, undermining both the verifiability of safety claims and the enforceability of accountability. Method: We propose the first cross-domain, verifiable safety accountability framework, drawing on regulatory and engineering practices from the nuclear energy, aviation software, and medical device industries. It introduces mandatory safety documentation standards and a layered accountability architecture, grounded in formal responsibility modeling, cross-sectoral compliance mapping, and risk-based attribution analysis. Contribution/Results: The framework yields an actionable safety disclosure template and a principled responsibility attribution guideline, giving regulators and AI labs institutionalized tools to strengthen trustworthiness, auditability, and accountability in high-risk AI development, thereby enabling rigorous, evidence-based oversight and liability assignment.

📝 Abstract
As artificial intelligence systems grow more capable and autonomous, frontier AI development poses potential systemic risks that could affect society at a massive scale. Current practices at many AI labs developing these systems lack sufficient transparency around safety measures, testing procedures, and governance structures. This opacity makes it difficult to verify safety claims or to establish appropriate liability when harm occurs. Drawing on liability frameworks from nuclear energy, aviation software, and healthcare, we propose a comprehensive approach to safety documentation and accountability in frontier AI development.
Problem

Research questions and friction points this paper addresses.

Addressing systemic risks in advanced AI development
Improving transparency in AI safety measures and testing
Establishing liability frameworks for AI-related harms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comprehensive safety documentation for frontier AI
Accountability frameworks from high-risk industries
Enhanced transparency in AI testing procedures
Aidan Kierans
Graduate Research Assistant, University of Connecticut
artificial intelligence · alignment · interpretability · normative ethics
Kaley J. Rittichier
Philosophy Department, University of Connecticut, Storrs, CT, USA
Utku Sonsayar
Philosophy Department, University of Connecticut, Storrs, CT, USA