Improving Radiology Report Conciseness and Structure via Local Large Language Models

📅 2024-11-06
🏛️ Journal of Imaging Informatics in Medicine
📈 Citations: 1
Influential: 0
🤖 AI Summary
Radiology reports are often lengthy and unstructured, making it hard for referring physicians to quickly identify critical imaging findings. To address this, we propose a two-stage large language model (LLM) framework built on an on-premises Mixtral-8x7B model: it first condenses redundant text via customized multi-turn prompt engineering, then restructures the report by organ system. The entire pipeline runs inside the institutional firewall, eliminating cloud dependency while ensuring data security, low computational overhead, and regulatory compliance. Evaluated on 814 real-world clinical reports, the method reduces redundant word counts by more than 53%, improving the readability of key findings and clinicians' information-retrieval efficiency. To the best of our knowledge, this is the first work to introduce a Mixtral-driven, on-premises paradigm for structured compression of radiology reports.

📝 Abstract
Radiology reports are often lengthy and unstructured, posing challenges for referring physicians to quickly identify critical imaging findings while increasing the risk of missed information. This retrospective study aimed to enhance radiology reports by making them concise and well-structured, with findings organized by relevant organs. To achieve this, we utilized private large language models (LLMs) deployed locally within our institution's firewall, ensuring data security and minimizing computational costs. Using a dataset of 814 radiology reports from seven board-certified body radiologists at [-blinded for review-], we tested five prompting strategies within the LangChain framework. After evaluating several models, the Mixtral LLM demonstrated superior adherence to formatting requirements compared to alternatives such as Llama. The optimal strategy condensed each report first and then applied structured formatting based on specific instructions, reducing verbosity while improving clarity. Across all radiologists and reports, the Mixtral LLM reduced redundant word counts by more than 53%. These findings highlight the potential of locally deployed, open-source LLMs to streamline radiology reporting. By generating concise, well-structured reports, these models enhance information retrieval and better meet the needs of referring physicians, ultimately improving clinical workflows.
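The two-stage strategy described above (condense first, then apply structured formatting) can be sketched as a minimal prompt pipeline. The prompt wording, function names, and the stub LLM below are illustrative assumptions, not the paper's actual prompts; in the study the model behind `llm` was a locally hosted Mixtral accessed through LangChain.

```python
# Hypothetical sketch of the paper's two-stage "condense, then structure"
# prompting strategy. Prompt texts are assumptions for illustration only.

CONDENSE_PROMPT = (
    "Rewrite the following radiology report findings, removing redundant "
    "phrases while preserving every clinical observation:\n\n{report}"
)
STRUCTURE_PROMPT = (
    "Reorganize the findings below under organ-system headings "
    "(e.g., Lungs, Liver, Kidneys), keeping the wording unchanged:\n\n{report}"
)

def restructure_report(report: str, llm) -> str:
    """Two-stage pipeline: condense the report first, then format it."""
    condensed = llm(CONDENSE_PROMPT.format(report=report))
    return llm(STRUCTURE_PROMPT.format(report=condensed))

def stub_llm(prompt: str) -> str:
    """Stand-in for a local Mixtral endpoint: echoes the report body so
    the sketch runs without a model server."""
    return prompt.split("\n\n", 1)[1]

print(restructure_report("Lungs are clear. The lungs are clear bilaterally.", stub_llm))
```

In practice, `llm` would wrap a call to the on-premises model (for example, a LangChain chat-model client pointed at an internal inference server), so no report text leaves the institutional firewall.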
Problem

Research questions and friction points this paper is trying to address.

Lengthy, unstructured radiology reports hinder rapid identification of critical findings
Redundant wording increases the risk of missed information for referring physicians
Cloud-based LLMs raise data-security and computational-cost concerns for clinical text
Innovation

Methods, ideas, or system contributions that make the work stand out.

Locally deployed private LLMs ensure data security at low computational cost
Two-stage prompting: condense the report first, then structure findings by organ
Mixtral outperformed Llama in formatting adherence, cutting redundant words by over 53%
Iryna Hartsock
Applied postdoctoral fellow at H. Lee Moffitt Cancer Center
Topological data analysis · Artificial intelligence
Cyrillo Araujo
Department of Diagnostic Imaging and Interventional Radiology, Moffitt Cancer Center and Research Institute, Tampa, FL, USA
Les Folio
Department of Diagnostic Imaging and Interventional Radiology, Moffitt Cancer Center and Research Institute, Tampa, FL, USA
Ghulam Rasool
Department of Machine Learning, Moffitt Cancer Center and Research Institute, Tampa, FL, USA