Cross-Modal Clustering-Guided Negative Sampling for Self-Supervised Joint Learning from Medical Images and Reports

📅 2025-06-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current medical image–report multimodal self-supervised learning faces three key challenges: suboptimal negative sampling (scarcity of hard negatives and false-negative interference), insufficient modeling of local fine-grained semantics, and loss of low-level visual details. To address these, we propose a cross-modal clustering-guided negative sampling strategy to mitigate false negatives and enhance discrimination of hard negatives; a cross-modal masked image modeling module that jointly captures local text–image semantics while explicitly preserving low-level visual features; and multi-granularity feature alignment coupled with cross-modal attention. Evaluated across five downstream datasets, our method achieves state-of-the-art performance on classification, detection, and segmentation tasks. It significantly improves the robustness and generalization capability of learned medical visual representations.
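The clustering-guided negative sampling described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, shapes, and the plain (single-modal) k-means stand in for the paper's cross-modal clustering, and the mask simply treats same-cluster candidates as likely false negatives while keeping different-cluster candidates as negatives.

```python
import numpy as np

def kmeans(X, k, iters=10, seed=0):
    # Plain k-means over feature vectors; an illustrative stand-in for
    # the paper's cross-modal clustering (hypothetical helper).
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # recompute centers from current assignments
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

def negative_mask(labels):
    # For anchor i, candidate j is a valid negative only if it falls in
    # a different cluster; same-cluster candidates are excluded as
    # probable false negatives before the contrastive loss is computed.
    labels = np.asarray(labels)
    return labels[:, None] != labels[None, :]
```

In an InfoNCE-style loss, this boolean mask would zero out the same-cluster terms in the denominator, so semantically similar pairs are not pushed apart.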

📝 Abstract
Learning medical visual representations directly from paired images and reports through multimodal self-supervised learning has emerged as a novel and efficient approach to digital diagnosis in recent years. However, existing models suffer from several severe limitations: 1) they neglect the selection of negative samples, resulting in a scarcity of hard negatives and the inclusion of false negatives; 2) they focus on global feature extraction and overlook the fine-grained local details that are crucial for medical image recognition tasks; and 3) their contrastive learning primarily targets high-level features while ignoring the low-level details that are essential for accurate medical analysis. Motivated by these critical issues, this paper presents a Cross-Modal Cluster-Guided Negative Sampling (CM-CGNS) method built on two ideas. First, it extends the k-means clustering used for local text features in the single-modal domain to the multimodal domain through cross-modal attention. This improvement increases the number of negative samples and boosts the model's representation capability. Second, it introduces a Cross-Modal Masked Image Reconstruction (CM-MIR) module that leverages local text-to-image features obtained via cross-modal attention to reconstruct masked local image regions. This module significantly strengthens the model's cross-modal information interaction and retains the low-level image features essential for downstream tasks. By addressing the aforementioned limitations, the proposed CM-CGNS learns effective and robust medical visual representations suitable for various recognition tasks. Extensive experiments on classification, detection, and segmentation tasks across five downstream datasets show that our method outperforms state-of-the-art approaches on multiple metrics, verifying its superior performance.
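The text-to-image fusion behind the CM-MIR module can be sketched with plain scaled dot-product cross-attention: masked image-patch queries attend to local text-token keys and values, and the attended features would then feed a decoder that reconstructs the masked regions. This is a minimal NumPy sketch under assumed toy shapes, not the paper's architecture.

```python
import numpy as np

def cross_attention(q, k, v):
    # Scaled dot-product attention: image-patch queries attend to local
    # text tokens (keys/values), yielding text-to-image fused features.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # softmax over tokens
    return w @ v

# Toy shapes (hypothetical): 4 masked patches, 6 report tokens, dim 8.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))   # queries from masked image patches
k = rng.normal(size=(6, 8))   # keys from local text features
v = rng.normal(size=(6, 8))   # values from local text features
fused = cross_attention(q, k, v)  # features a decoder could use to
                                  # reconstruct the masked patches
```

In the full model, `fused` would pass through a reconstruction head whose pixel-level loss is what preserves the low-level visual details.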
Problem

Research questions and friction points this paper is trying to address.

Improves negative sample selection in medical image-report learning
Enhances fine-grained local detail extraction in medical images
Strengthens low-level feature retention for accurate medical analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-modal clustering for negative sampling
Masked image reconstruction with text features
Enhanced local and low-level feature retention
Libin Lan
College of Computer Science and Engineering, Chongqing University of Technology
Hongxing Li
College of Computer Science and Engineering, Chongqing University of Technology
Zunhui Xia
College of Computer Science and Engineering, Chongqing University of Technology
Juan Zhou
Department of Pharmacy, the Second Affiliated Hospital of Army Military Medical University
Xiaofei Zhu
Chongqing University of Technology (Computer Science)
Yongmei Li
Department of Radiology, the First Affiliated Hospital of Chongqing Medical University
Yudong Zhang
University of Leicester, HFWLA/FIET/FEAI/FBCS/SMIEEE/SMACM/DSACM, Clarivate Highly Cited Researcher (artificial intelligence, deep learning, medical image processing)
Xin Luo
University of Science and Technology of China (Computer Vision)