🤖 AI Summary
In histopathological image segmentation, severe pseudo-label noise arises from ambiguous gland boundaries and morphological misclassification. To address this challenge under low-label regimes, this paper proposes the first multi-task Mixture-of-Experts (MoE) semi-supervised framework tailored for medical image segmentation. It integrates three specialized subnetworks (primary segmentation, signed distance field regression, and boundary prediction) within an MoE architecture. A dynamic gating module refines pseudo-labels by adaptively fusing predictions across experts, while an adaptive multi-objective loss function automatically balances task-specific weights. Evaluated on the GlaS and CRAG benchmarks under extreme label scarcity (e.g., 10% labeled data), the method significantly outperforms state-of-the-art approaches. Results demonstrate that the MoE design effectively mitigates pseudo-label noise and enhances morphological awareness, confirming its efficacy and generalizability in low-data biomedical segmentation.
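The dynamic gating idea above can be sketched as follows. This is a minimal illustrative module, not the paper's implementation: the class name `MultiGatePseudoLabeler`, the 1x1-conv gate, and all tensor shapes are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiGatePseudoLabeler(nn.Module):
    """Illustrative gating sketch: fuses per-pixel probability maps from
    several experts using softmax weights predicted from their concatenated
    feature maps. Hypothetical design, not the paper's exact module."""

    def __init__(self, feat_channels: int, num_experts: int = 3):
        super().__init__()
        # 1x1 conv maps concatenated expert features to one weight per expert
        self.gate = nn.Conv2d(feat_channels * num_experts, num_experts, kernel_size=1)

    def forward(self, expert_feats, expert_probs):
        # expert_feats: list of (B, C, H, W); expert_probs: list of (B, 1, H, W)
        weights = F.softmax(self.gate(torch.cat(expert_feats, dim=1)), dim=1)  # (B, E, H, W)
        probs = torch.stack(expert_probs, dim=1).squeeze(2)                    # (B, E, H, W)
        fused = (weights * probs).sum(dim=1, keepdim=True)                     # (B, 1, H, W)
        return (fused > 0.5).float()  # hard pseudo-label for the unlabeled image
```

The key property is that the fusion weights are spatially varying, so each expert can dominate in the regions it models best (e.g., the boundary expert near gland edges).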
📝 Abstract
Semi-supervised learning has been employed to alleviate the need for extensive labeled data in histopathology image segmentation, but existing methods struggle with noisy pseudo-labels caused by ambiguous gland boundaries and morphological misclassification. This paper introduces Semi-MOE, to the best of our knowledge the first multi-task Mixture-of-Experts framework for semi-supervised histopathology image segmentation. Our approach leverages three specialized expert networks: a main segmentation expert, a signed distance field regression expert, and a boundary prediction expert, each dedicated to capturing distinct morphological features. A Multi-Gating Pseudo-labeling module then dynamically aggregates expert features, enabling a robust fuse-and-refine pseudo-labeling mechanism. Furthermore, to eliminate manual tuning while dynamically balancing multiple learning objectives, we propose an Adaptive Multi-Objective Loss. Extensive experiments on the GlaS and CRAG benchmarks show that our method outperforms state-of-the-art approaches in low-label settings, highlighting the potential of MoE-based architectures for advancing semi-supervised segmentation. Our code is available at https://github.com/vnlvi2k3/Semi-MoE.
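One common way to balance multiple task losses without hand-tuned weights is learnable uncertainty weighting; the sketch below shows that pattern as a plausible reading of the Adaptive Multi-Objective Loss. The class name and the log-variance formulation are assumptions for illustration, and the paper's actual formulation may differ.

```python
import torch
import torch.nn as nn

class AdaptiveMultiObjectiveLoss(nn.Module):
    """Illustrative adaptive weighting of task losses via learnable
    log-variances (homoscedastic-uncertainty style). Assumed formulation,
    not necessarily the one used in Semi-MOE."""

    def __init__(self, num_tasks: int = 3):
        super().__init__()
        # one learnable log-variance per task:
        # segmentation, signed distance field regression, boundary prediction
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = torch.zeros((), dtype=self.log_vars.dtype)
        for i, loss in enumerate(task_losses):
            precision = torch.exp(-self.log_vars[i])  # down-weights noisy tasks
            # the + log_vars[i] term prevents the trivial solution of
            # driving every precision to zero
            total = total + precision * loss + self.log_vars[i]
        return total
```

Because the weights are parameters updated by backpropagation, the balance between segmentation, distance-field, and boundary objectives adapts during training instead of being fixed by a grid search.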