A Concrete Roadmap towards Safety Cases based on Chain-of-Thought Monitoring

📅 2025-10-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
As AI capabilities approach hazardous thresholds, inability-based safety cases become insufficient and alternative assurance approaches are needed. This paper proposes a roadmap for safety cases grounded in chain-of-thought (CoT) monitoring of reasoning models. Its core proposal is a two-part safety case: (1) the model lacks dangerous capabilities when operating without its CoT, and (2) any dangerous capabilities enabled by a CoT are detectable by a CoT monitor, so that monitoring can support both control and trustworthiness arguments. The paper examines two threats to monitorability, neuralese and encoded reasoning, categorizing the latter into three forms (linguistic drift, steganography, and alien reasoning) and analyzing their potential drivers. It evaluates existing and novel techniques for preserving CoT faithfulness, explores extracting a monitorable CoT from otherwise non-monitorable reasoning, and uses prediction markets to aggregate forecasts on the key technical milestones that determine whether such safety cases are feasible.

📝 Abstract
As AI systems approach dangerous capability levels where inability safety cases become insufficient, we need alternative approaches to ensure safety. This paper presents a roadmap for constructing safety cases based on chain-of-thought (CoT) monitoring in reasoning models and outlines our research agenda. We argue that CoT monitoring might support both control and trustworthiness safety cases. We propose a two-part safety case: (1) establishing that models lack dangerous capabilities when operating without their CoT, and (2) ensuring that any dangerous capabilities enabled by a CoT are detectable by CoT monitoring. We systematically examine two threats to monitorability: neuralese and encoded reasoning, which we categorize into three forms (linguistic drift, steganography, and alien reasoning) and analyze their potential drivers. We evaluate existing and novel techniques for maintaining CoT faithfulness. For cases where models produce non-monitorable reasoning, we explore the possibility of extracting a monitorable CoT from a non-monitorable CoT. To assess the viability of CoT monitoring safety cases, we establish prediction markets to aggregate forecasts on key technical milestones influencing their feasibility.
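
To make the monitoring half of this safety case concrete, the sketch below shows one way a runtime CoT gate could work: a monitor scores the reasoning chain behind each proposed action and blocks execution when the score crosses a threshold. This is a minimal illustration under assumed components, not the paper's implementation; the ProposedAction structure, the keyword-based monitor_cot stub, and the 0.5 threshold are all placeholders for a trained monitor model and a calibrated policy.

```python
from dataclasses import dataclass

# Illustrative sketch of part (2) of the safety case: gate every action on a
# monitor's reading of the CoT that produced it. Part (1) -- showing the model
# lacks dangerous capabilities *without* its CoT -- is assumed to be
# established offline by capability evaluations.

@dataclass
class ProposedAction:
    cot: str      # the reasoning chain that led to the action
    action: str   # the action the model wants to execute

SUSPICION_THRESHOLD = 0.5  # placeholder; a real deployment would calibrate this

def monitor_cot(cot: str) -> float:
    """Stand-in monitor returning a suspicion score in [0, 1].

    A real system would use a trained monitor model; this keyword check only
    makes the control flow concrete and runnable.
    """
    red_flags = ("exfiltrate", "disable oversight", "acquire credentials")
    hits = sum(flag in cot.lower() for flag in red_flags)
    return 0.0 if hits == 0 else min(1.0, 0.3 + hits / len(red_flags))

def gate(proposal: ProposedAction) -> bool:
    """Execute the action only if its CoT passes monitoring."""
    score = monitor_cot(proposal.cot)
    if score >= SUSPICION_THRESHOLD:
        print(f"BLOCKED (score={score:.2f}): escalate to human review")
        return False
    print(f"ALLOWED (score={score:.2f}): {proposal.action}")
    return True

if __name__ == "__main__":
    gate(ProposedAction(cot="Plan: summarize the quarterly report.",
                        action="write_summary()"))
    gate(ProposedAction(cot="First disable oversight, then exfiltrate keys.",
                        action="send_request()"))
```

Under this framing, the runtime gate only needs to cover capabilities that require a CoT, which is why the safety case also depends on that CoT staying monitorable.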
Problem

Research questions and friction points this paper is trying to address.

Ensuring AI safety through chain-of-thought monitoring in reasoning models
Detecting dangerous capabilities enabled by reasoning chains in AI systems
Addressing threats to CoT monitorability such as neuralese and encoded reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

A two-part safety case for reasoning models built on chain-of-thought (CoT) monitoring
Establishing that models lack dangerous capabilities when operating without a CoT
Extracting monitorable reasoning from non-monitorable CoT (see the sketch after this list)
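
A minimal sketch of that last idea, assuming a hypothetical translator step (for example, a paraphrasing model that re-expresses drifted or encoded reasoning in plain language) whose output is then scored by an ordinary CoT monitor. The function names and toy stand-ins below are illustrative assumptions, not the paper's method.

```python
from typing import Callable

def monitor_via_extraction(
    raw_cot: str,
    translate: Callable[[str], str],   # assumed: maps non-monitorable CoT to plain language
    monitor: Callable[[str], float],   # assumed: returns a suspicion score in [0, 1]
    threshold: float = 0.5,
) -> bool:
    """Return True if the extracted (translated) CoT passes monitoring."""
    readable_cot = translate(raw_cot)  # e.g. decode drifted or encoded reasoning
    return monitor(readable_cot) < threshold

if __name__ == "__main__":
    # Toy stand-ins: a "translator" that strips filler tokens and a keyword monitor.
    toy_translate = lambda cot: cot.replace("~", " ").strip()
    toy_monitor = lambda cot: 1.0 if "exfiltrate" in cot.lower() else 0.0
    print(monitor_via_extraction("plan~the~weekly~report", toy_translate, toy_monitor))      # True
    print(monitor_via_extraction("quietly~exfiltrate~weights", toy_translate, toy_monitor))  # False
```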