🤖 AI Summary
Quantifying confidence in top-level safety claims for state-of-the-art AI systems remains challenging, particularly for high-stakes harms such as cyber misuse.
Method: Grounded in the Assurance 2.0 framework, this work builds an explainable and reproducible confidence assessment pipeline. It implements the Delphi method purely with large language models (LLMs) to automate confidence estimation at argument leaf nodes, introduces a defeater prioritization mechanism to make review effort more efficient, and designs a decision-maker-oriented confidence visualization for communicating assurance results.
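The paper does not publish its implementation, so the following Python sketch only illustrates one plausible shape of an LLM-only Delphi round: several independently prompted model "panellists" return probability estimates for a leaf claim, anonymised rationales are circulated, and the panel iterates until the estimates converge. The `Estimate` type, the `Panellist` signature, and the convergence rule are all assumptions for illustration, not the authors' design.

```python
import statistics
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch, not the paper's code: an LLM-only Delphi process for
# estimating confidence in a single argument leaf node. Each "panellist" is
# one independently prompted LLM call returning a probability and rationale.

@dataclass
class Estimate:
    probability: float  # panellist's confidence that the leaf claim holds
    rationale: str      # short justification, anonymised and shared next round

# Signature assumed for illustration: a panellist sees the claim text plus
# the previous round's anonymised rationales.
Panellist = Callable[[str, list[str]], Estimate]

def delphi_confidence(claim: str, ask: Panellist, n_panellists: int = 5,
                      max_rounds: int = 3, tolerance: float = 0.05) -> float:
    """Run estimate/feedback rounds until the panel's estimates converge."""
    feedback: list[str] = []
    probs: list[float] = []
    for _ in range(max_rounds):
        estimates = [ask(claim, feedback) for _ in range(n_panellists)]
        probs = [e.probability for e in estimates]
        # Stop once the spread of estimates is small (a simple convergence test).
        if statistics.pstdev(probs) < tolerance:
            break
        # Controlled feedback: circulate rationales, not identities or scores.
        feedback = [e.rationale for e in estimates]
    return statistics.median(probs)  # consensus confidence for the leaf node
```

Running multiple independent prompts with anonymised, rationale-only feedback mirrors the classic Delphi goals of reducing anchoring and groupthink; how the paper actually prompts, aggregates, and terminates may differ.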
Results: Empirical evaluation demonstrates both the feasibility and the practical limits of numerical confidence quantification, and shows that the process itself improves the transparency and reproducibility of safety cases. The proposed end-to-end toolchain, spanning confidence modeling, assessment, and stakeholder communication, offers a deployable bridge between AI developers and regulators in rigorous assurance engineering.
📝 Abstract
Powerful new frontier AI technologies are bringing many benefits to society, but at the same time they bring new risks. AI developers and regulators are therefore seeking ways to assure the safety of such systems, and one promising method under consideration is the use of safety cases. A safety case presents a structured argument in support of a top-level claim about a safety property of the system. Such top-level claims are often presented as a binary statement, for example "Deploying the AI system does not pose unacceptable risk". However, in practice, it is often not possible to make such statements unequivocally. This raises the question of what level of confidence should be associated with a top-level claim. We adopt the Assurance 2.0 safety assurance methodology, and we ground our work by applying it to a frontier AI inability argument that addresses the harm of cyber misuse. We find that numerical quantification of confidence is challenging, though the processes associated with generating such estimates can lead to improvements in the safety case. We introduce a method for better enabling reproducibility and transparency in the probabilistic assessment of confidence in argument leaf nodes, through a purely LLM-implemented Delphi method. We propose a method by which AI developers can prioritise argument defeaters, thereby making their investigation more efficient. Proposals are also made on how best to communicate confidence information to executive decision-makers.
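The abstract does not spell out the prioritisation method. As a minimal sketch under assumptions of our own (not taken from the paper), one natural triage rule is to rank defeaters by expected impact: the assessed probability that a defeater holds multiplied by the confidence the top-level claim would lose if it does. The `Defeater` fields and the example candidates below are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical defeater triage, NOT the paper's method: rank candidate
# defeaters by expected impact = P(defeater holds) x top-level confidence
# lost if it does, and investigate the largest first.

@dataclass
class Defeater:
    description: str
    p_holds: float           # assessed probability the defeater is valid
    confidence_loss: float   # drop in top-level confidence if it is valid

def triage(defeaters: list[Defeater]) -> list[Defeater]:
    """Order defeaters so review effort goes to the highest expected impact."""
    return sorted(defeaters, key=lambda d: d.p_holds * d.confidence_loss,
                  reverse=True)

# Illustrative candidates for a cyber-misuse inability argument.
candidates = [
    Defeater("Eval suite misses long-horizon cyber tasks", 0.30, 0.40),
    Defeater("Elicitation underestimates fine-tuned capability", 0.15, 0.60),
    Defeater("Benchmark contamination inflates inability evidence", 0.10, 0.20),
]
for d in triage(candidates):
    print(f"{d.p_holds * d.confidence_loss:.3f}  {d.description}")
```

An expected-impact ordering is one standard way to allocate scarce review effort; the paper's actual mechanism may weight defeaters differently, for example by evidence cost or by position in the argument tree.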