Efficient Multi-Slide Visual-Language Feature Fusion for Placental Disease Classification

📅 2025-08-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Placental whole-slide image (WSI) classification faces two key challenges: (1) inefficient patch selection strategies that struggle to balance diagnostic performance and computational cost; and (2) loss of global histopathological context due to patch-level modeling. To address these, we propose a two-stage efficient patch selection module coupled with an adaptive graph learning–driven hybrid multimodal fusion mechanism, jointly integrating visual features, learned graph-structured tissue relationships, and clinical text reports for end-to-end modeling of critical pathological semantics. Our method unifies parameter-free image compression, learnable patch filtering, graph neural networks, and vision-language modeling, optimized end-to-end under patient-level supervision. Evaluated on our proprietary dataset and two public placental WSI benchmarks, our approach achieves state-of-the-art classification accuracy while significantly reducing computational overhead—effectively mitigating both global contextual deficiency and scalability bottlenecks in large-scale WSI analysis.

📝 Abstract
Accurate prediction of placental diseases via whole slide images (WSIs) is critical for preventing severe maternal and fetal complications. However, WSI analysis presents significant computational challenges due to the massive data volume. Existing WSI classification methods encounter critical limitations: (1) inadequate patch selection strategies that either compromise performance or fail to sufficiently reduce computational demands, and (2) the loss of global histological context resulting from patch-level processing approaches. To address these challenges, we propose an Efficient multimodal framework for Patient-level placental disease Diagnosis, named EmmPD. Our approach introduces a two-stage patch selection module that combines parameter-free and learnable compression strategies, optimally balancing computational efficiency with critical feature preservation. Additionally, we develop a hybrid multimodal fusion module that leverages adaptive graph learning to enhance pathological feature representation and incorporates textual medical reports to enrich global contextual understanding. Extensive experiments conducted on both a self-constructed patient-level placental dataset and two public datasets demonstrate that our method achieves state-of-the-art diagnostic performance. The code is available at https://github.com/ECNU-MultiDimLab/EmmPD.
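The two-stage patch selection the abstract describes (a parameter-free compression pass followed by learnable filtering) can be illustrated with a minimal stdlib-only sketch. This is not the authors' implementation: the variance-based first stage, the fixed weight vector standing in for the learned scorer, and the keep ratios are all illustrative assumptions.

```python
import random

def select_patches(patches, keep_stage1=0.5, keep_stage2=0.25):
    """Two-stage patch selection sketch: a parameter-free pass drops
    low-variance (likely background) patches, then a learnable scorer
    (stood in here by a fixed linear score) keeps the top fraction."""
    # Stage 1: parameter-free compression -- rank patches by intensity
    # variance and keep the most variable half (background tiles are flat).
    def variance(p):
        m = sum(p) / len(p)
        return sum((x - m) ** 2 for x in p) / len(p)

    ranked = sorted(patches, key=variance, reverse=True)
    survivors = ranked[: max(1, int(len(patches) * keep_stage1))]

    # Stage 2: learnable filtering -- in the real model a small network
    # would score each patch; a hypothetical fixed weight vector stands in.
    weights = [0.3, -0.1, 0.7, 0.2]  # placeholder for learned parameters

    def score(p):
        return sum(w * x for w, x in zip(weights, p))

    rescored = sorted(survivors, key=score, reverse=True)
    return rescored[: max(1, int(len(patches) * keep_stage2))]

# Toy "patches": 4-dim feature vectors for 8 tiles of one slide.
random.seed(0)
patches = [[random.random() for _ in range(4)] for _ in range(8)]
selected = select_patches(patches)
print(len(selected))  # 2 of 8 patches survive both stages
```

The point of the two stages is cost: the cheap, parameter-free pass shrinks the candidate set before the (more expensive, gradient-trained) scorer ever runs, which is how the paper balances efficiency against feature preservation.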
Problem

Research questions and friction points this paper is trying to address.

Improves placental disease classification via multi-slide visual-language fusion
Addresses computational challenges in whole slide image analysis
Enhances global histological context retention in patch-level processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage patch selection for efficiency
Hybrid multimodal fusion with graphs
Textual reports enhance global context
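The hybrid fusion idea in the bullets above — adaptive graph learning over patch features plus a textual-report embedding for global context — can be sketched as follows. This is a simplified stand-in, not the paper's module: the similarity-threshold adjacency, the single unweighted mean-aggregation step, and concatenation-based fusion are all assumptions chosen for brevity.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def graph_fuse(patch_feats, text_feat, sim_threshold=0.9):
    """Build a similarity graph over patch features, run one round of
    mean-aggregation message passing, then fuse with a text embedding."""
    n = len(patch_feats)
    # Stand-in for adaptive adjacency: connect patches whose cosine
    # similarity exceeds a threshold (self-loops included). The real
    # model learns this structure end-to-end.
    adj = [[1 if i == j or cosine(patch_feats[i], patch_feats[j]) > sim_threshold
            else 0 for j in range(n)] for i in range(n)]
    # One GCN-style propagation step: each node becomes the mean of its
    # neighbours, spreading tissue-level context across the slide.
    smoothed = []
    for i in range(n):
        nbrs = [patch_feats[j] for j in range(n) if adj[i][j]]
        smoothed.append([sum(col) / len(nbrs) for col in zip(*nbrs)])
    # Pool to a slide-level vector and concatenate the report embedding.
    pooled = [sum(col) / n for col in zip(*smoothed)]
    return pooled + text_feat

fused = graph_fuse([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]], [0.5, 0.5])
print(len(fused))  # 4: 2-dim pooled visual vector + 2-dim text embedding
```

The graph step is what restores the global histological context that independent patch-level processing loses: similar tissue regions exchange information before pooling, and the text report contributes patient-level semantics the pixels alone cannot carry.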
Hang Guo
Tsinghua University
Computer Vision, Efficient ML, LLMs
Qing Zhang
East China Normal University, Shanghai, China
Zixuan Gao
East China Normal University, Shanghai, China
Siyuan Yang
Wallenberg-NTU Presidential Postdoctoral Fellowship, Nanyang Technological University
Computer Vision, Action Recognition
Shulin Peng
East China Normal University, Shanghai, China
Xiang Tao
Institute of Automation, Chinese Academy of Sciences
Data Mining, Misinformation Detection, Graph Representation Learning
Ting Yu
Obstetrics and Gynecology Hospital of Fudan University, Shanghai, China
Yan Wang
East China Normal University, Shanghai, China
Qingli Li
East China Normal University, Shanghai, China