Dark Speculation: Combining Qualitative and Quantitative Understanding in Frontier AI Risk Analysis

📅 2025-11-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Quantifying catastrophic AI risks remains intractable due to "deep ambiguity": the absence of precedent, the difficulty of defining the "space of catastrophic events," and extreme uncertainty. Method: This paper introduces *dark speculation*, a method that decouples speculative scenario generation from actuarial underwriting analysis, constructs rich, causally grounded narratives across multiple risk domains in parallel (including mechanistic pathways and mitigation strategies), and integrates scenario planning with an iterative institutional design formalized by a simplified Lévy stochastic framework. Contribution/Results: By separating speculation from probabilistic calibration and increasing narrative thickness, the approach mitigates the Lucretian cognitive bias (the neglect of unprecedented low-probability events), enabling meaningful estimation of tail-event probability distributions. The framework improves the rigor of systemic assessment under deep uncertainty, curbs unwarranted optimism and alarmism, and yields actionable summary statistics to inform AI safety governance and policy decisions.

📝 Abstract
Estimating catastrophic harms from frontier AI is hindered by deep ambiguity: many of its risks are not only unobserved but unanticipated by analysts. The central limitation of current risk analysis is the inability to populate the *catastrophic event space*, or the set of potential large-scale harms to which probabilities might be assigned. This intractability is worsened by the *Lucretius problem*, or the tendency to infer future risks only from past experience. We propose a process of *dark speculation*, in which systematically generating and refining catastrophic scenarios ("qualitative" work) is coupled with estimating their likelihoods and associated damages (quantitative underwriting analysis). The idea is neither to predict the future nor to enable insurance for its own sake, but to use narrative and underwriting tools together to generate probability distributions over outcomes. We formalize this process using a simplified catastrophic Lévy stochastic framework and propose an iterative institutional design in which (1) speculation (including scenario planning) generates detailed catastrophic event narratives, (2) insurance underwriters assign probabilistic and financial parameters to these narratives, and (3) decision-makers synthesize the results into summary statistics to inform judgment. Analysis of the model reveals the value of (a) maintaining independence between speculation and underwriting, (b) analyzing multiple risk categories in parallel, and (c) generating "thick" catastrophic narratives rich in causal (counterfactual) and mitigative detail. While the approach cannot eliminate deep ambiguity, it offers a systematic way to reason about extreme, low-probability events in frontier AI, tempering both complacency and overreaction. The framework is adaptable for iterative use and can be further augmented with AI systems.
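The three-stage institutional design in the abstract (speculation → underwriting → synthesis) can be sketched as a minimal data pipeline. This is an illustrative reading of the process, not the paper's implementation; the class names, fields, and the toy scenario names and numbers below are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """Stage 1: a catastrophic event narrative produced by speculation."""
    name: str
    narrative: str                 # causal / mitigative detail ("thickness")
    # Filled in by underwriters, independently of the speculators:
    annual_probability: float = 0.0
    expected_damage: float = 0.0   # damage conditional on occurrence

def underwrite(scenario: Scenario, probability: float, damage: float) -> Scenario:
    """Stage 2: underwriters assign probabilistic and financial parameters."""
    scenario.annual_probability = probability
    scenario.expected_damage = damage
    return scenario

def synthesize(scenarios: list[Scenario]) -> dict:
    """Stage 3: decision-makers reduce the portfolio to summary statistics."""
    expected_annual_loss = sum(
        s.annual_probability * s.expected_damage for s in scenarios
    )
    worst_case = max((s.expected_damage for s in scenarios), default=0.0)
    return {"expected_annual_loss": expected_annual_loss,
            "worst_case_damage": worst_case}

# Two parallel risk categories with purely illustrative numbers
portfolio = [
    underwrite(Scenario("bio-misuse", "…"), probability=0.001, damage=5e11),
    underwrite(Scenario("infrastructure", "…"), probability=0.005, damage=1e11),
]
stats = synthesize(portfolio)
```

Keeping `underwrite` as a separate step mirrors the paper's point (a): the speculators who write the narratives never set the probabilities themselves.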
Problem

Research questions and friction points this paper is trying to address.

Estimating catastrophic AI risks is hindered by deep ambiguity and unanticipated events.
Overcoming the Lucretius problem limiting risk inference to past experiences.
Developing systematic scenario generation and probabilistic analysis for extreme AI outcomes.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combining qualitative scenario generation with quantitative underwriting analysis
Using a simplified catastrophic Lévy stochastic framework for formalization
Proposing an iterative institutional design with independent speculation and underwriting
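The "simplified catastrophic Lévy stochastic framework" above can be illustrated with the simplest Lévy-type jump model: a compound Poisson process whose jump sizes are heavy-tailed. The paper's actual formalization is not reproduced here; the function, parameter values, and the choice of a Pareto severity distribution are assumptions for illustration only:

```python
import random

def simulate_catastrophe_losses(rate, alpha, x_min, horizon, seed=0):
    """Simulate event losses from a compound Poisson process with
    Pareto (heavy-tailed) jump sizes -- a minimal Lévy-type model.

    rate:    expected number of catastrophic events per unit time
    alpha:   Pareto tail index (smaller => heavier tail)
    x_min:   minimum event severity
    horizon: simulation length in time units
    """
    rng = random.Random(seed)
    t, losses = 0.0, []
    while True:
        t += rng.expovariate(rate)   # exponential inter-arrival times
        if t > horizon:
            break
        # Pareto severity via inverse-CDF sampling
        u = rng.random()
        losses.append(x_min / (1.0 - u) ** (1.0 / alpha))
    return losses

# One Monte Carlo path for a single risk category over a 10-year horizon
losses = simulate_catastrophe_losses(rate=0.5, alpha=1.5, x_min=1.0, horizon=10.0)
total = sum(losses)
```

With `alpha <= 2` the severity distribution has infinite variance, which is why summary statistics over many simulated paths (tail quantiles rather than means) are the useful decision inputs.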