MoE-Health: A Mixture of Experts Framework for Robust Multimodal Healthcare Prediction

📅 2025-08-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of incomplete and heterogeneous multimodal clinical data (e.g., EHRs, medical images, clinical notes), this paper proposes a dynamic multimodal fusion framework that is robust to missing modalities. Methodologically, it introduces a Mixture-of-Experts (MoE) architecture comprising modality-specific expert networks and a learnable dynamic gating mechanism that adaptively routes and weights expert outputs based solely on the available modalities, so no complete multimodal input is required. Evaluated on three clinical prediction tasks using MIMIC-IV, the method consistently outperforms state-of-the-art baselines under diverse missingness patterns (e.g., single- or dual-modality absence), demonstrating superior performance stability and robustness. It significantly enhances the practicality and generalizability of multimodal fusion in real-world healthcare settings where data incompleteness is pervasive.
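The availability-driven gating described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class name `MaskedMoEFusion`, the per-modality expert MLPs, and the choice of gating the softmax with a binary availability mask are all assumptions for the sake of the example.

```python
import torch
import torch.nn as nn


class MaskedMoEFusion(nn.Module):
    """Sketch of modality-aware MoE fusion (illustrative, not the paper's code):
    one expert network per modality, plus a gate that assigns zero weight to
    missing modalities and renormalizes over the available ones."""

    def __init__(self, modality_dims, hidden_dim=64):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim),
            )
            for d in modality_dims
        )
        # Gate scores each expert from the availability mask alone;
        # a richer gate could also condition on the modality embeddings.
        self.gate = nn.Linear(len(modality_dims), len(modality_dims))

    def forward(self, inputs, mask):
        # inputs: list of (batch, d_m) tensors, one per modality
        #         (missing modalities can be zero-filled placeholders)
        # mask:   (batch, num_modalities) float tensor of 0/1 availability flags
        expert_out = torch.stack(
            [expert(x) for expert, x in zip(self.experts, inputs)], dim=1
        )  # (batch, num_modalities, hidden_dim)
        logits = self.gate(mask)
        # Missing modalities get -inf logits so their softmax weight is exactly 0.
        logits = logits.masked_fill(mask == 0, float("-inf"))
        weights = torch.softmax(logits, dim=-1)  # (batch, num_modalities)
        # Weighted sum of expert outputs -> fused representation.
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)


# Usage: two modalities (e.g., EHR features and a note embedding);
# the first sample is missing modality 1.
fusion = MaskedMoEFusion(modality_dims=[10, 8])
ehr = torch.randn(2, 10)
notes = torch.zeros(2, 8)  # zero-filled where absent
mask = torch.tensor([[1.0, 0.0], [1.0, 1.0]])
fused = fusion([ehr, notes], mask)  # (2, 64) fused representation
```

Masking the gate logits rather than the inputs keeps the expert weighting a proper distribution over whichever modalities are present, which mirrors the "adaptively routes and weights based on available modalities" behavior summarized above.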

📝 Abstract
Healthcare systems generate diverse multimodal data, including Electronic Health Records (EHR), clinical notes, and medical images. Effectively leveraging this data for clinical prediction is challenging, particularly as real-world samples often present with varied or incomplete modalities. Existing approaches typically require complete modality data or rely on manual selection strategies, limiting their applicability in real-world clinical settings where data availability varies across patients and institutions. To address these limitations, we propose MoE-Health, a novel Mixture of Experts framework designed for robust multimodal fusion in healthcare prediction. The MoE-Health architecture is specifically developed to handle samples with differing modalities and improve performance on critical clinical tasks. By leveraging specialized expert networks and a dynamic gating mechanism, our approach selects and combines relevant experts based on available data modalities, enabling flexible adaptation to varying data availability scenarios. We evaluate MoE-Health on the MIMIC-IV dataset across three critical clinical prediction tasks: in-hospital mortality, long length of stay, and hospital readmission prediction. Experimental results demonstrate that MoE-Health achieves superior performance compared to existing multimodal fusion methods while maintaining robustness across different modality availability patterns. The framework effectively integrates multimodal information, offering improved predictive performance and robustness in handling heterogeneous and incomplete healthcare data, making it particularly suitable for deployment in diverse healthcare environments with heterogeneous data availability.
Problem

Research questions and friction points this paper is trying to address.

Handling incomplete multimodal healthcare data effectively
Dynamically selecting experts for varying data availability
Improving robustness in clinical prediction tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of Experts framework for multimodal healthcare
Dynamic gating mechanism for modality selection
Robust fusion with incomplete multimodal data