🤖 AI Summary
To address error accumulation and training instability caused by pseudo-label drift in semi-supervised remote sensing image semantic segmentation, this paper proposes a heterogeneous dual-student Vision Transformer (ViT) framework. The method integrates a ViT backbone, dual-student collaborative training, multi-granularity feature interaction, and text-driven guidance. Its core contributions are: (1) a novel explicit-implicit semantic co-guidance mechanism that jointly leverages CLIP text embeddings and learnable queries to enforce explicit semantic priors while enabling implicit semantic modeling; and (2) a CLIP-DINOv3-driven global-local feature collaboration strategy that enhances cross-domain generalization and robustness. Evaluated on six mainstream remote sensing datasets under diverse annotation ratios and scene conditions, the proposed approach consistently achieves state-of-the-art performance, significantly outperforming existing methods in both segmentation accuracy and training stability.
📝 Abstract
Semi-supervised remote sensing (RS) image semantic segmentation offers a promising solution to alleviate the burden of exhaustive annotation, yet it fundamentally struggles with pseudo-label drift, a phenomenon where confirmation bias leads to the accumulation of errors during training. In this work, we propose Co2S, a stable semi-supervised RS segmentation framework that synergistically fuses priors from vision-language models and self-supervised models. Specifically, we construct a heterogeneous dual-student architecture comprising two distinct ViT-based vision foundation models initialized with pretrained CLIP and DINOv3 to mitigate error accumulation and pseudo-label drift. To effectively incorporate these distinct priors, an explicit-implicit semantic co-guidance mechanism is introduced that utilizes text embeddings and learnable queries to provide explicit and implicit class-level guidance, respectively, thereby jointly enhancing semantic consistency. Furthermore, a global-local feature collaborative fusion strategy is developed to effectively fuse the global contextual information captured by CLIP with the local details produced by DINOv3, enabling the model to generate highly precise segmentation results. Extensive experiments on six popular datasets demonstrate the superiority of the proposed method, which consistently achieves leading performance across various partition protocols and diverse scenarios. Project page is available at https://xavierjiezou.github.io/Co2S/.
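The dual-student idea described above, where two heterogeneous models supervise each other with confidence-filtered pseudo-labels to curb pseudo-label drift, can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the per-pixel logit representation, and the confidence threshold `tau` are illustrative assumptions.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of class logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def pseudo_label(probs, tau=0.9):
    """Return (predicted class, keep flag); keep only confident predictions."""
    conf = max(probs)
    return probs.index(conf), conf >= tau

def cross_supervise(logits_a, logits_b, tau=0.9):
    """Each student is trained on the other's confident pseudo-labels.

    logits_a / logits_b: per-pixel class logits from the two students
    (e.g. CLIP-initialized and DINOv3-initialized branches). Pixels whose
    pseudo-label falls below the threshold are masked out (None), which is
    the drift-limiting step.
    """
    targets_for_a, targets_for_b = [], []
    for la, lb in zip(logits_a, logits_b):
        cls_b, keep_b = pseudo_label(softmax(lb), tau)
        cls_a, keep_a = pseudo_label(softmax(la), tau)
        targets_for_a.append(cls_b if keep_b else None)  # B teaches A
        targets_for_b.append(cls_a if keep_a else None)  # A teaches B
    return targets_for_a, targets_for_b
```

For example, a confident pixel from one student becomes a training target for the other, while ambiguous pixels (near-uniform logits) are dropped from both loss terms rather than allowed to reinforce errors.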