Learning to Diagnose Privately: DP-Powered LLMs for Radiology Report Classification

📅 2025-06-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Balancing patient privacy protection and model performance remains challenging in multi-abnormality classification for radiology reports. Method: We propose DP-LoRA—the first differential privacy (DP)-enhanced LoRA fine-tuning framework tailored for medical text—integrating BERT-medium/small and ALBERT-base architectures under a strict privacy budget (ε = 1.0), jointly trained and evaluated on MIMIC-CXR and CT-RATE. Results: DP-LoRA achieves a weighted F1 of 0.88 on MIMIC-CXR (only 0.02 below the non-private baseline) and 0.59 on CT-RATE (a 0.14 improvement over prior DP methods), substantially outperforming existing DP fine-tuning approaches. This work provides the first systematic quantification of the privacy–utility trade-off in fine-tuning large language models for medical applications, thereby bridging a critical gap in compliant, high-fidelity clinical AI modeling.

Technology Category

Application Category

📝 Abstract
Purpose: This study proposes a framework for fine-tuning large language models (LLMs) with differential privacy (DP) to perform multi-abnormality classification on radiology report text. By injecting calibrated noise during fine-tuning, the framework seeks to mitigate the privacy risks associated with sensitive patient data and protect against data leakage while maintaining classification performance. Materials and Methods: We used 50,232 radiology reports from the publicly available MIMIC-CXR chest radiography and CT-RATE computed tomography datasets, collected between 2011 and 2019. Fine-tuning of LLMs was conducted to classify 14 labels from MIMIC-CXR dataset, and 18 labels from CT-RATE dataset using Differentially Private Low-Rank Adaptation (DP-LoRA) in high and moderate privacy regimes (across a range of privacy budgets = {0.01, 0.1, 1.0, 10.0}). Model performance was evaluated using weighted F1 score across three model architectures: BERT-medium, BERT-small, and ALBERT-base. Statistical analyses compared model performance across different privacy levels to quantify the privacy-utility trade-off. Results: We observe a clear privacy-utility trade-off through our experiments on 2 different datasets and 3 different models. Under moderate privacy guarantees the DP fine-tuned models achieved comparable weighted F1 scores of 0.88 on MIMIC-CXR and 0.59 on CT-RATE, compared to non-private LoRA baselines of 0.90 and 0.78, respectively. Conclusion: Differentially private fine-tuning using LoRA enables effective and privacy-preserving multi-abnormality classification from radiology reports, addressing a key challenge in fine-tuning LLMs on sensitive medical data.
Problem

Research questions and friction points this paper is trying to address.

Develop DP-powered LLMs for radiology report classification
Mitigate privacy risks in sensitive patient data processing
Balance privacy-utility trade-off in medical LLM fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

DP-LoRA for private LLM fine-tuning
Calibrated noise injection for privacy
Multi-abnormality classification with DP
🔎 Similar Papers
No similar papers found.
P
Payel Bhattacharjee
Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ
F
Fengwei Tian
Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ
R
Ravi Tandon
Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ
J
Joseph Lo
Duke Department of Radiology, Duke Electrical and Computer Engineering, Durham, NC
H
Heidi Hanson
Oak Ridge National Laboratory, Oak Ridge, TN
G
Geoffrey Rubin
Department of Medical Imaging, University of Arizona, Tucson, AZ
N
Nirav Merchant
Data Science Institute, University of Arizona, Tucson, AZ
John Gounley
John Gounley
Oak Ridge National Laboratory