🤖 AI Summary
This study addresses the novel task of automatically extracting executable medical orders from physician–patient dialogues, aiming to reduce clinical documentation burden and improve electronic health record (EHR) automation. The authors formally define the task, construct the first annotated dataset for medical order extraction, and organize the MEDIQA-OE 2025 shared task—bridging a critical gap in converting spoken or textual clinical interactions into structured, actionable medical directives. Methodologically, participants combined closed- and open-weight large language models (LLMs) with natural language understanding, fine-grained information extraction, and dialogue modeling to support multi-stage order identification and standardized output generation. Six international teams participated in MEDIQA-OE 2025, submitting diverse solutions and establishing a public benchmark leaderboard. Evaluation on real-world clinical data demonstrates the effectiveness and clinical applicability of the submitted approaches, advancing medical NLP toward operationally actionable outcomes.
📝 Abstract
Clinical documentation increasingly uses automatic speech recognition and summarization, yet converting conversations into actionable medical orders for Electronic Health Records remains unexplored. A solution to this problem can significantly reduce the documentation burden of clinicians and directly impact downstream patient care. We introduce the MEDIQA-OE 2025 shared task, the first challenge on extracting medical orders from doctor-patient conversations. Six teams participated in the shared task, experimenting with a broad range of approaches and with both closed- and open-weight large language models (LLMs). In this paper, we describe the MEDIQA-OE task, dataset, final leaderboard ranking, and participants' solutions.