🤖 AI Summary
Current medical image–report multimodal self-supervised learning faces three key challenges: suboptimal negative sampling (scarcity of hard negatives and false-negative interference), insufficient modeling of local fine-grained semantics, and loss of low-level visual details. To address these, we propose a cross-modal clustering-guided negative sampling strategy to mitigate false negatives and enhance discrimination of hard negatives; a cross-modal masked image modeling module that jointly captures local text–image semantics while explicitly preserving low-level visual features; and multi-granularity feature alignment coupled with cross-modal attention. Evaluated across five downstream datasets, our method achieves state-of-the-art performance on classification, detection, and segmentation tasks. It significantly improves the robustness and generalization capability of learned medical visual representations.
📝 Abstract
Learning medical visual representations directly from paired images and reports through multimodal self-supervised learning has emerged as a novel and efficient approach to digital diagnosis in recent years. However, existing models suffer from several severe limitations: 1) they neglect the selection of negative samples, resulting in a scarcity of hard negatives and the inclusion of false negatives; 2) they focus on global feature extraction, overlooking the fine-grained local details that are crucial for medical image recognition tasks; and 3) their contrastive learning primarily targets high-level features while ignoring the low-level details that are essential for accurate medical analysis. Motivated by these critical issues, this paper presents a Cross-Modal Cluster-Guided Negative Sampling (CM-CGNS) method built on two key ideas. First, it extends the k-means clustering applied to local text features in the single-modal domain to the multimodal domain through cross-modal attention. This improvement increases the number of negative samples and boosts the model's representational capability. Second, it introduces a Cross-Modal Masked Image Reconstruction (CM-MIR) module that leverages local text-to-image features obtained via cross-modal attention to reconstruct masked local image regions. This module significantly strengthens the model's cross-modal information interaction and retains the low-level image features essential for downstream tasks. By addressing the aforementioned limitations, the proposed CM-CGNS learns effective and robust medical visual representations suitable for various recognition tasks. Extensive experiments on classification, detection, and segmentation tasks across five downstream datasets show that our method outperforms state-of-the-art approaches on multiple metrics, verifying its superior performance.
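To make the first idea concrete, here is a minimal NumPy sketch of cluster-guided negative sampling. All names and shapes are illustrative assumptions, not the authors' code: text queries attend over image patch tokens to form fused cross-modal features, a plain k-means groups them, and for an anchor sample we exclude same-cluster samples (likely false negatives) while ranking the remaining candidates by distance to surface hard negatives.

```python
# Illustrative sketch only (hypothetical names/shapes, not the paper's implementation).
import numpy as np

rng = np.random.default_rng(0)

def cross_modal_attention(text_q, img_kv):
    # Scaled dot-product attention: text tokens (queries) attend over image patch tokens.
    scores = text_q @ img_kv.T / np.sqrt(img_kv.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ img_kv  # text-to-image fused features

def kmeans(x, k, iters=10):
    # Bare-bones k-means; a real pipeline would use a library implementation.
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(axis=0)
    return np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)

def select_negatives(anchor_idx, feats, labels, n_neg=4):
    # Same-cluster samples are treated as potential false negatives and excluded;
    # the nearest other-cluster samples are kept as hard negatives.
    cand = np.flatnonzero(labels != labels[anchor_idx])
    dists = ((feats[cand] - feats[anchor_idx]) ** 2).sum(-1)
    return cand[np.argsort(dists)[:n_neg]]

# Toy batch: 16 reports (8 text tokens each) paired with 16 images (12 patch tokens each).
text = rng.normal(size=(16, 8, 32))
imgs = rng.normal(size=(16, 12, 32))
fused = np.stack([cross_modal_attention(t, v).mean(axis=0)
                  for t, v in zip(text, imgs)])
labels = kmeans(fused, k=4)
negs = select_negatives(0, fused, labels, n_neg=4)
```

The key design point the sketch illustrates is that clustering happens on *fused* cross-modal features rather than on text features alone, so negative selection reflects both modalities.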
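The second idea, cross-modal masked image reconstruction, can likewise be sketched in a few lines. Again, everything here is a hypothetical simplification: patches are randomly masked, text tokens gather information from the visible patches via cross-modal attention, and an L2 reconstruction loss is computed on the masked positions only (a trained model would use a learned decoder instead of the naive pooled prediction below).

```python
# Illustrative sketch only (hypothetical simplification, not the paper's module).
import numpy as np

rng = np.random.default_rng(1)

def attend(queries, keys_values):
    # Scaled dot-product attention over the visible image patches.
    scores = queries @ keys_values.T / np.sqrt(keys_values.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ keys_values

patches = rng.normal(size=(12, 32))   # image patch tokens
text = rng.normal(size=(8, 32))       # report token features

mask = rng.random(12) < 0.5           # randomly mask roughly half the patches
visible = patches[~mask]

# Text tokens attend over the visible patches, yielding text-to-image features.
text2img = attend(text, visible)
context = text2img.mean(axis=0)       # pooled cross-modal context vector

# Naive stand-in decoder: predict every masked patch as the pooled context.
pred = np.tile(context, (int(mask.sum()), 1))
loss = ((pred - patches[mask]) ** 2).mean()  # L2 loss on masked positions only
```

Because the reconstruction target is raw patch content, gradients through this loss push the encoder to retain low-level visual detail, which is the property the abstract argues contrastive learning alone loses.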