🤖 AI Summary
Current conversational diagnostic systems rely predominantly on text-only interaction and cannot analyze the multimodal clinical artifacts (such as medical images, ECGs, and PDF reports) that are routinely exchanged in telemedicine. To address this, we extend the Articulate Medical Intelligence Explorer (AMIE) into a state-driven multimodal conversational diagnostic framework built on Gemini 2.0 Flash, featuring a dynamic state-aware mechanism that jointly enables multimodal understanding, uncertainty modeling, and structured clinical questioning. Follow-up questions are generated autonomously from uncertainty over the evolving patient state, emulating the diagnostic reasoning of experienced clinicians. Evaluated on 105 OSCE-style cases, the system was rated superior to primary care physicians on 7 of 9 multimodal and 29 of 32 non-multimodal clinical axes, including diagnostic accuracy, indicating that multimodal capability and diagnostic performance can be advanced together.
📝 Abstract
Large Language Models (LLMs) have demonstrated great potential for conducting diagnostic conversations, but evaluation has been largely limited to language-only interactions, deviating from the real-world requirements of remote care delivery. Instant messaging platforms permit clinicians and patients to upload and discuss multimodal medical artifacts seamlessly during medical consultations, but the ability of LLMs to reason over such data while preserving other attributes of competent diagnostic conversation remains unknown. Here we advance the conversational diagnosis and management performance of the Articulate Medical Intelligence Explorer (AMIE) through a new capability to gather and interpret multimodal data, and to reason about it precisely during consultations. Leveraging Gemini 2.0 Flash, our system implements a state-aware dialogue framework in which conversation flow is dynamically controlled by intermediate model outputs reflecting patient states and evolving diagnoses. Follow-up questions are strategically directed by uncertainty in those patient states, yielding a more structured multimodal history-taking process that emulates experienced clinicians. We compared AMIE to primary care physicians (PCPs) in a randomized, blinded, OSCE-style study of chat-based consultations with patient actors. We constructed 105 evaluation scenarios incorporating artifacts such as smartphone skin photos, ECGs, and PDFs of clinical documents, spanning diverse conditions and demographics. Our rubric assessed multimodal capabilities alongside other clinically meaningful axes such as history-taking, diagnostic accuracy, management reasoning, communication, and empathy. Specialist evaluation rated AMIE superior to PCPs on 7/9 multimodal and 29/32 non-multimodal axes (including diagnostic accuracy). These results show clear progress in multimodal conversational diagnostic AI, though real-world translation requires further research.
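To make the state-aware control loop in the abstract concrete, here is a minimal, hypothetical sketch of how uncertainty over an evolving differential might gate question-asking versus phase transitions. Every name here (`PatientState`, `next_action`, the entropy threshold) is an illustrative assumption, not the paper's implementation: the actual system drives this loop with intermediate Gemini 2.0 Flash outputs rather than hand-written rules.

```python
# Hedged sketch of a state-aware dialogue loop (assumed design, not AMIE's code).
import math
from dataclasses import dataclass, field

@dataclass
class PatientState:
    """Intermediate summary of the consultation so far (hypothetical structure)."""
    differential: dict[str, float] = field(default_factory=dict)  # diagnosis -> probability
    artifacts_requested: list[str] = field(default_factory=list)  # e.g. "skin photo", "ECG"
    phase: str = "history_taking"                                 # -> "diagnosis" -> "management"

def entropy(probs: dict[str, float]) -> float:
    """Shannon entropy of the differential; a simple proxy for diagnostic uncertainty."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

def next_action(state: PatientState, uncertainty_threshold: float = 1.0) -> str:
    """Control conversation flow from the evolving patient state: keep asking
    targeted questions (or requesting multimodal artifacts) while the differential
    is uncertain; otherwise advance the consultation phase."""
    if state.phase == "history_taking":
        if entropy(state.differential) > uncertainty_threshold:
            # Target the question or artifact expected to discriminate between
            # the top competing diagnoses.
            top = sorted(state.differential, key=state.differential.get, reverse=True)[:2]
            return f"ask_question(discriminate={top})"
        state.phase = "diagnosis"
    if state.phase == "diagnosis":
        state.phase = "management"
        return "present_differential()"
    return "discuss_management_plan()"

# Toy usage: a broad differential keeps the system in history-taking.
state = PatientState(differential={"eczema": 0.4, "psoriasis": 0.35, "tinea": 0.25})
print(next_action(state))  # -> ask_question(discriminate=['eczema', 'psoriasis'])
```

The design point this sketch illustrates is that history-taking terminates on an uncertainty criterion rather than a fixed question budget, which is what lets follow-up questions mimic a clinician's targeted probing.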