Federated Self-Supervised Learning for One-Shot Cross-Modal and Cross-Imaging Technique Segmentation

📅 2025-03-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address segmentation challenges in federated learning for medical imaging, namely single-sample (one-shot) annotation, cross-modality (MR/CT) discrepancies, and cross-imaging-technique heterogeneity, this paper proposes the first federated self-supervised one-shot segmentation framework. Methodologically, it is the first to adapt self-supervised few-shot segmentation to the federated setting, integrating contrastive pretraining, multimodal feature alignment, and an enhanced (fused) Dice loss. The framework enables collaborative training across clients with heterogeneous modalities without sharing raw data or labels. On unseen held-out local validation sets, it matches or surpasses a FedAvg-based CoWPro variant, demonstrating strong generalization and robustness. Key contributions include: (1) establishing federated self-supervised one-shot segmentation as a novel research direction; (2) enabling effective cross-modal federated joint training; and (3) empirically validating the feasibility of multi-center medical image segmentation under ultra-low annotation budgets.
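The summary above compares against a FedAvg-based CoWPro variant. The paper does not reproduce the aggregation rule here, but standard FedAvg averages client model parameters weighted by local dataset size; a minimal sketch, with the function name and dict-of-arrays parameter layout being assumptions for illustration:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Standard FedAvg aggregation (McMahan et al.): weighted average
    of client parameters, weights proportional to local sample counts.

    client_weights: list of dicts mapping parameter name -> np.ndarray
    client_sizes:   list of local dataset sizes, one per client
    """
    total = float(sum(client_sizes))
    aggregated = {}
    for name in client_weights[0]:
        # Each client contributes in proportion to its share of the data;
        # only parameters travel, never raw images or labels.
        aggregated[name] = sum(
            (n / total) * w[name] for w, n in zip(client_weights, client_sizes)
        )
    return aggregated
```

In the cross-modality setting described here, each client (e.g. an MR site and a CT site) would run its local self-supervised updates, and only these parameter tensors would be averaged at the server.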

📝 Abstract
Decentralized federated learning enables learning of data representations from multiple sources without compromising the privacy of the clients. In applications like medical image segmentation, where obtaining a large annotated dataset from a single source is a distressing problem, federated self-supervised learning can provide some solace. In this work, we push the limits further by exploring a federated self-supervised one-shot segmentation task representing a more data-scarce scenario. We adopt a pre-existing self-supervised few-shot segmentation framework CoWPro and adapt it to the federated learning scenario. To the best of our knowledge, this work is the first to attempt a self-supervised few-shot segmentation task in the federated learning domain. Moreover, we consider the clients to be constituted of data from different modalities and imaging techniques like MR or CT, which makes the problem even harder. Additionally, we reinforce and improve the baseline CoWPro method using a fused dice loss which shows considerable improvement in performance over the baseline CoWPro. Finally, we evaluate this novel framework on a completely unseen held-out part of the local client dataset. We observe that the proposed framework can achieve performance at par or better than the FedAvg version of the CoWPro framework on the held-out validation dataset.
Problem

Research questions and friction points this paper is trying to address.

Federated self-supervised one-shot segmentation for data-scarce scenarios
Cross-modal and cross-imaging technique segmentation in federated learning
Improving baseline CoWPro with fused dice loss for better performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated self-supervised one-shot segmentation
Adapted CoWPro for federated learning
Fused dice loss improves baseline
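The fused Dice loss highlighted above is not specified in detail in this summary. A common way to "fuse" Dice with a pixel-wise term is a weighted combination with cross-entropy; the sketch below illustrates that pattern only, and the specific combination, the `alpha` weight, and the function names are assumptions, not the paper's definition:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss: 1 - (2 * overlap / total mass) over a
    # predicted probability map and a binary ground-truth mask.
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def bce_loss(pred, target, eps=1e-6):
    # Pixel-wise binary cross-entropy, clipped for numerical stability.
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def fused_dice_loss(pred, target, alpha=0.5):
    # Hypothetical fusion: convex combination of the region-level Dice
    # term and the pixel-level cross-entropy term.
    return alpha * dice_loss(pred, target) + (1.0 - alpha) * bce_loss(pred, target)
```

The intuition behind such combinations is that Dice handles foreground/background class imbalance (common in organ segmentation) while the cross-entropy term supplies denser per-pixel gradients.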