ME-Mamba: Multi-Expert Mamba with Efficient Knowledge Capture and Fusion for Multimodal Survival Analysis

📅 2025-09-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Pathological whole-slide images (WSIs) provide only slide-level labels, and in multimodal survival analysis, heterogeneous representations between histopathology and genomics often lead to modality-specific information loss. To address this, we propose a Multi-Expert Mamba architecture comprising dedicated histopathology and genomics expert modules, alongside a collaborative fusion expert. Our method is the first to integrate Mamba’s sequential scanning mechanism with Optimal Transport (OT) for token-level cross-modal alignment. We further enforce global distribution consistency via Maximum Mean Discrepancy (MMD) regularization and introduce a global cross-fusion loss to enhance complementary representation learning. Evaluated on five TCGA cancer cohorts, our approach achieves state-of-the-art survival prediction performance—delivering both high accuracy and low computational overhead. This work establishes a scalable, efficient paradigm for high-throughput multimodal prognostic modeling.

Technology Category

Application Category

📝 Abstract
Survival analysis using whole-slide images (WSIs) is crucial in cancer research. Despite significant successes, pathology images typically only provide slide-level labels, which hinders the learning of discriminative representations from gigapixel WSIs. With the rapid advancement of high-throughput sequencing technologies, multimodal survival analysis integrating pathology images and genomics data has emerged as a promising approach. We propose a Multi-Expert Mamba (ME-Mamba) system that captures discriminative pathological and genomic features while enabling efficient integration of both modalities. This approach achieves complementary information fusion without losing critical information from individual modalities, thereby facilitating accurate cancer survival analysis. Specifically, we first introduce a Pathology Expert and a Genomics Expert to process unimodal data separately. Both experts are designed with Mamba architectures that incorporate conventional scanning and attention-based scanning mechanisms, allowing them to extract discriminative features from long instance sequences containing substantial redundant or irrelevant information. Second, we design a Synergistic Expert responsible for modality fusion. It explicitly learns token-level local correspondences between the two modalities via Optimal Transport, and implicitly enhances distribution consistency through a global cross-modal fusion loss based on Maximum Mean Discrepancy. The fused feature representations are then passed to a mamba backbone for further integration. Through the collaboration of the Pathology Expert, Genomics Expert, and Synergistic Expert, our method achieves stable and accurate survival analysis with relatively low computational complexity. Extensive experimental results on five datasets in The Cancer Genome Atlas (TCGA) demonstrate our state-of-the-art performance.
Problem

Research questions and friction points this paper is trying to address.

Integrating pathology images with genomics data for multimodal survival analysis
Capturing discriminative features from gigapixel WSIs with slide-level labels only
Achieving efficient fusion of pathological and genomic features without information loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Expert Mamba system for multimodal survival analysis
Pathology and Genomics Experts with Mamba architectures for feature extraction
Synergistic Expert using Optimal Transport for modality fusion
🔎 Similar Papers
No similar papers found.
C
Chengsheng Zhang
Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai 200032, China
Linhao Qu
Linhao Qu
Ph.D. School of Basic Medical Sciences, Fudan University
computational pathologymedical image analysismultimodal information fusiondata mining
X
Xiaoyu Liu
Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai 200032, China
Z
Zhijian Song
Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai 200032, China