🤖 AI Summary
This study addresses the low collaborative efficacy and ill-defined safety boundaries between AI systems and clinicians in medical diagnostics. We propose a delegation-based autonomous decision-support framework for pathology diagnosis, enabling conditional AI autonomy—full autonomy for cases meeting reliability thresholds, and human-AI collaboration otherwise—governed by dynamic delegation criteria. Methodologically, the framework integrates dynamically adaptive delegation rules, quantitative AI reliability assessment, ensemble histopathology AI models, and orchestrated human-AI workflows. Our key contribution is a verifiable, tiered autonomy mechanism that transcends static collaboration paradigms, rigorously balancing safety, diagnostic efficiency, and the limits of AI capability. Experimental evaluation demonstrates a significant reduction in clinical review time and improved overall diagnostic performance. Moreover, the framework provides a structured, auditable pathway toward regulatory approval (e.g., FDA/CE) of graded autonomy in clinical AI systems.
📝 Abstract
In this paper we propose an advanced approach to integrating artificial intelligence (AI) into healthcare: autonomous decision support. This approach allows the AI algorithm to act autonomously for a subset of patient cases whilst serving a supportive role for the remainder, based on defined delegation criteria. By leveraging the complementary strengths of both humans and AI, it aims to deliver greater overall performance than existing human-AI teaming models. It ensures safe handling of patient cases and potentially reduces clinician review time, whilst remaining mindful of AI tool limitations. After setting the approach within the context of current human-AI teaming models, we outline the delegation criteria and apply them to a specific AI-based tool used in histopathology. The potential impact of the approach and the regulatory requirements for its successful implementation are then discussed.
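The delegation logic described above can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: the names (`Case`, `delegate`), the per-case reliability estimate, and the 0.95 autonomy threshold are all assumptions chosen for the example; the paper's actual delegation criteria are defined in the body of the work.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A patient case with a hypothetical per-case AI reliability estimate."""
    case_id: str
    ai_confidence: float   # model confidence in its predicted diagnosis
    reliability: float     # calibrated reliability estimate for this case type

def delegate(case: Case, autonomy_threshold: float = 0.95) -> str:
    """Route a case to full AI autonomy or to human-AI collaboration.

    Cases meeting the reliability threshold are handled autonomously
    (with results logged for audit); all others are routed to a
    clinician, with the AI in a supportive role.
    """
    if case.reliability >= autonomy_threshold:
        return "autonomous"
    return "collaborative"

# Example triage of a small worklist (illustrative values)
cases = [Case("A1", 0.99, 0.97), Case("B2", 0.80, 0.70)]
routes = {c.case_id: delegate(c) for c in cases}
```

In this sketch, case `A1` would be handled autonomously and `B2` routed for clinician review; in practice the threshold and reliability estimates would be validated per tool and per case subset, as the delegation criteria in the paper require.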