Benchmarking Foundation Models with Multimodal Public Electronic Health Records

📅 2025-07-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses critical challenges—performance degradation, fairness disparities, and lack of interpretability—in deploying multimodal electronic health record (EHR) foundation models clinically. To this end, we establish the first standardized multimodal EHR benchmark built on MIMIC-IV. We propose a unified preprocessing pipeline for heterogeneous clinical data and conduct the first systematic evaluation of eight state-of-the-art unimodal and multimodal foundation models—including both domain-specific and general-purpose architectures—on predictive tasks. Experimental results demonstrate that multimodal fusion consistently and significantly improves predictive performance across diverse clinical outcomes, without exacerbating inter-group bias; furthermore, model behavior exhibits inherent interpretability. All code, preprocessing protocols, and evaluation frameworks are fully open-sourced to enable reproducible, verifiable, and trustworthy medical AI research.

Technology Category

Application Category

📝 Abstract
Foundation models have emerged as a powerful approach for processing electronic health records (EHRs), offering flexibility to handle diverse medical data modalities. In this study, we present a comprehensive benchmark that evaluates the performance, fairness, and interpretability of foundation models, both as unimodal encoders and as multimodal learners, using the publicly available MIMIC-IV database. To support consistent and reproducible evaluation, we developed a standardized data processing pipeline that harmonizes heterogeneous clinical records into an analysis-ready format. We systematically compared eight foundation models, encompassing both unimodal and multimodal models, as well as domain-specific and general-purpose variants. Our findings demonstrate that incorporating multiple data modalities leads to consistent improvements in predictive performance without introducing additional bias. Through this benchmark, we aim to support the development of effective and trustworthy multimodal artificial intelligence (AI) systems for real-world clinical applications. Our code is available at https://github.com/nliulab/MIMIC-Multimodal.
Problem

Research questions and friction points this paper is trying to address.

Evaluating performance, fairness, interpretability of foundation models in EHRs
Standardizing data processing for heterogeneous clinical records analysis
Comparing multimodal and unimodal models for clinical AI applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Standardized pipeline for harmonizing clinical records
Benchmarking eight foundation models systematically
Multimodal models improve predictive performance consistently
🔎 Similar Papers
No similar papers found.
K
Kunyu Yu
Centre for Quantitative Medicine and Duke-NUS AI + Medical Science Initiative, Duke-NUS Medical School, Singapore, Singapore
R
Rui Yang
Centre for Quantitative Medicine and Duke-NUS AI + Medical Science Initiative, Duke-NUS Medical School, Singapore, Singapore
J
Jingchi Liao
Centre for Quantitative Medicine and Duke-NUS AI + Medical Science Initiative, Duke-NUS Medical School, Singapore, Singapore
S
Siqi Li
Centre for Quantitative Medicine and Duke-NUS AI + Medical Science Initiative, Duke-NUS Medical School, Singapore, Singapore
Huitao Li
Huitao Li
Duke-Nus Medical School
Medical Informatics
Irene Li
Irene Li
Project Lecturer (特任講師) at University of Tokyo
Large Language ModelsGraph Neural NetworksBioNLPMedical NLPText Summarization
Y
Yifan Peng
Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
Rishikesan Kamaleswaran
Rishikesan Kamaleswaran
Duke University
Host-ResponseInjuryCritical CareMachine LearningArtificial Intelligence
N
Nan Liu
Centre for Quantitative Medicine, Duke-NUS AI + Medical Science Initiative and Programme in Health Services and Systems Research, Duke-NUS Medical School and NUS Artificial Intelligence Institute, National University of Singapore, Singapore, Singapore