🤖 AI Summary
To address the insufficient reliability of LLM-as-Judges (LaJ) in safety-critical domains such as healthcare and nuclear facility operations, this paper proposes a risk-controllable human-in-the-loop evaluation framework. Methodologically, it introduces: (1) the first safety evidence framework explicitly designed for LaJ; (2) a context-aware, error-severity-weighted metric basket that replaces single-threshold judgment criteria; and (3) a consistency-based confidence measure with configurable thresholds that dynamically trigger human review. Evaluated on simulated clinical triage and nuclear facility scheduling tasks, the framework reduces high-risk misjudgment rates by 62% while sustaining a 91% automation rate. These results demonstrate a significant improvement in the safety-efficiency trade-off, enabling more trustworthy deployment of LaJ in mission-critical applications.
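As a rough illustration of contribution (2), the sketch below shows one way a context-aware, severity-weighted metric basket could be aggregated. It is a minimal sketch, not the paper's implementation: the metric names, weights, and severity multipliers are all illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical metric basket: each metric contributes a score in [0, 1],
# and its weight is scaled by a context-dependent severity factor, so an
# error on a high-severity dimension (e.g. clinical accuracy in a triage
# letter) penalises the aggregate more than a low-severity one (e.g. tone).

@dataclass
class Metric:
    name: str
    score: float      # LaJ score for this dimension, in [0, 1]
    weight: float     # base importance of the dimension
    severity: float   # context-derived error-severity multiplier (>= 1)

def basket_score(metrics: list[Metric]) -> float:
    """Severity-weighted aggregate: low scores on severe dimensions dominate."""
    total = sum(m.weight * m.severity for m in metrics)
    return sum(m.weight * m.severity * m.score for m in metrics) / total

# Illustrative values only -- the paper's actual metrics and weights differ.
triage = [
    Metric("clinical_accuracy", score=0.92, weight=0.5, severity=3.0),
    Metric("urgency_ranking",   score=0.80, weight=0.3, severity=2.0),
    Metric("tone",              score=0.65, weight=0.2, severity=1.0),
]
print(f"basket score: {basket_score(triage):.3f}")
```

Under this formulation the severity multipliers, not the base weights alone, determine how strongly a weak score on a safety-relevant dimension drags down the aggregate.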
📝 Abstract
Large Language Models (LLMs) are increasingly used in text processing pipelines to respond intelligently to a variety of inputs and generation tasks. This raises the possibility of replacing human roles that bottleneck existing information flows, whether through insufficient staff or process complexity. However, LLMs make mistakes, and some processing roles are safety-critical: for example, triaging post-operative care for patients based on hospital referral letters, or updating site access schedules for work crews at nuclear facilities. If we want to introduce LLMs into critical information flows previously handled by humans, how can we make them safe and reliable? Rather than making performative claims about augmented generation frameworks or graph-based techniques, this paper argues that the safety argument should focus on the type of evidence we get from evaluation points in LLM processes, particularly in frameworks that employ LLM-as-Judges (LaJ) evaluators. Although we cannot get deterministic evaluations for many natural language processing tasks, this paper argues that by adopting a basket of weighted metrics it may be possible to lower the risk of errors within an evaluation, to use context sensitivity to define error severity, and to design confidence thresholds that trigger human review of critical LaJ judgments when concordance across evaluators is low.
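To make the confidence-threshold mechanism concrete, here is a minimal sketch assuming concordance is measured as the fraction of independent judges agreeing with the majority verdict; the verdict labels and threshold values are hypothetical, not from the paper.

```python
from collections import Counter

# Hypothetical escalation rule: run several independent LaJ evaluators,
# measure concordance as the share of judges agreeing with the majority
# verdict, and route the item to human review when concordance falls
# below a threshold chosen for the task's risk level.

def needs_human_review(verdicts: list[str], threshold: float) -> bool:
    """True when judge concordance is too low to trust the majority verdict."""
    majority_count = Counter(verdicts).most_common(1)[0][1]
    concordance = majority_count / len(verdicts)
    return concordance < threshold

# A safety-critical task gets a stricter threshold than a routine one.
judges = ["urgent", "urgent", "routine", "urgent", "urgent"]   # 0.8 concordance
print(needs_human_review(judges, threshold=0.9))   # True  -> escalate to a human
print(needs_human_review(judges, threshold=0.75))  # False -> keep automated
```

The design choice this illustrates is that the threshold, not the judgment itself, is where risk appetite is configured: the same disagreement pattern escalates in a high-severity context and passes through in a low-severity one.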