🤖 AI Summary
This study addresses the misalignment between benchmark performance and the real-world safety and reliability of medical large language models (LLMs) in clinical practice. We propose an autonomy-level assessment framework (L0–L3), inspired by the levels-of-autonomy taxonomy from autonomous driving, that maps clinical tasks, ranging from information assistance and integration to decision support and agent-based execution, to corresponding risk levels. Each level is rigorously defined by operational boundaries and quantifiable fault-tolerance thresholds. By integrating established benchmarks with clinical risk dimensions, we construct an interpretable, regulation-aware, hierarchical evaluation standard. Our key contribution is the first evidence-generation pathway bridging laboratory benchmarks and clinically trustworthy LLM deployment: it explicitly links evaluation outcomes to regulatory requirements and risk-mitigation strategies, providing a practical, actionable methodology for the safe and responsible clinical adoption of medical LLMs.
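To make the level-to-task mapping concrete, here is a minimal Python sketch of how an L0–L3 ladder with operational boundaries and fault-tolerance thresholds might be represented. All level names, permitted tasks, thresholds, and oversight postures below are illustrative assumptions, not values taken from the paper.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AutonomyLevel:
    """One rung of the L0-L3 ladder: permitted actions plus a fault-tolerance bound."""
    level: int                        # 0..3
    name: str                         # human-readable label
    permitted_tasks: tuple[str, ...]  # operational boundary: what the model may do
    max_error_rate: float             # illustrative fault-tolerance threshold
    oversight: str                    # required human-in-the-loop posture


# Illustrative values only; the paper defines the actual boundaries and thresholds.
AUTONOMY_LADDER = (
    AutonomyLevel(0, "Information assistance",
                  ("retrieval", "literature summarization"),
                  0.10, "clinician reads and verifies"),
    AutonomyLevel(1, "Information integration",
                  ("record aggregation", "report drafting"),
                  0.05, "clinician reviews output"),
    AutonomyLevel(2, "Decision support",
                  ("differential suggestion", "triage recommendation"),
                  0.01, "clinician approves each recommendation"),
    AutonomyLevel(3, "Supervised agent",
                  ("order drafting", "workflow execution"),
                  0.001, "clinician supervises and can intervene"),
)


def required_level(task: str) -> AutonomyLevel:
    """Map a clinical task to the lowest autonomy level that permits it."""
    for lvl in AUTONOMY_LADDER:
        if task in lvl.permitted_tasks:
            return lvl
    raise ValueError(f"Task {task!r} is not mapped to any autonomy level")
```

The key design point the framework implies is that risk attaches to the *action* a task permits, not to the model: the same model evaluated for L0 retrieval and L2 decision support faces very different error budgets.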
📝 Abstract
Medical large language models achieve strong scores on standard benchmarks; however, translating those results into safe, reliable performance in clinical workflows remains a challenge. This survey reframes evaluation through a levels-of-autonomy lens (L0–L3), spanning informational tools, information transformation and aggregation, decision support, and supervised agents. We align existing benchmarks and metrics with the actions permitted at each level and their associated risks, making the evaluation targets explicit. This motivates a level-conditioned blueprint for selecting metrics, assembling evidence, and reporting claims, alongside directions that link evaluation to oversight. By centering autonomy, the survey moves the field beyond score-based claims toward credible, risk-aware evidence for real clinical use.
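As a rough illustration of what a "level-conditioned blueprint" could look like operationally, the sketch below selects metrics and evidence types as a function of autonomy level. The specific metrics, evidence sources, and the rule that higher levels inherit the reporting obligations of lower ones are all assumptions made for this example, not the survey's prescribed lists.

```python
# Hypothetical mapping from autonomy level to required metrics and evidence.
EVIDENCE_BLUEPRINT = {
    0: {"metrics": ["factual accuracy", "citation grounding"],
        "evidence": ["static QA benchmarks"]},
    1: {"metrics": ["faithfulness", "omission rate"],
        "evidence": ["summarization benchmarks", "chart-review audits"]},
    2: {"metrics": ["diagnostic accuracy", "calibration", "harm severity"],
        "evidence": ["expert adjudication", "prospective shadow trials"]},
    3: {"metrics": ["task completion", "intervention rate", "error recovery"],
        "evidence": ["supervised deployment logs", "post-market surveillance"]},
}


def evaluation_plan(level: int) -> dict:
    """Assemble the metrics and evidence a claim at this autonomy level reports."""
    if level not in EVIDENCE_BLUEPRINT:
        raise ValueError(f"Unknown autonomy level: {level}")
    # Assumption: higher levels inherit the obligations of all lower levels.
    plan = {"metrics": [], "evidence": []}
    for lvl in range(level + 1):
        plan["metrics"] += EVIDENCE_BLUEPRINT[lvl]["metrics"]
        plan["evidence"] += EVIDENCE_BLUEPRINT[lvl]["evidence"]
    return plan


# Example: a decision-support (L2) claim must report L0 and L1 evidence as well.
print(evaluation_plan(2))
```

Read this way, a benchmark score is never a claim by itself; it is one entry in the evidence bundle that a given autonomy level demands.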