Annotation-Free Open-Vocabulary Segmentation for Remote-Sensing Images

📅 2025-08-25
📈 Citations: 0 (influential: 0)
🤖 AI Summary
Remote sensing semantic segmentation faces dual challenges: poor generalization to novel classes and high annotation costs. Existing open-vocabulary methods struggle to accommodate the large scale variations, fine-grained details, and multimodal (optical/SAR) nature of remote sensing imagery. This paper introduces SegEarth-OV, the first annotation-free open-vocabulary segmentation framework for remote sensing. It combines the novel SimFeatUp module for detail-aware feature upsampling, a global bias mitigation mechanism, and AlignEarth, a cross-modal knowledge-transfer strategy that leverages vision-language models. The framework draws on semantic guidance from pretrained vision-language models, feature upsampling, local semantic enhancement, and unsupervised knowledge distillation, requiring neither task-specific annotations nor post-training adaptation. SegEarth-OV achieves state-of-the-art segmentation accuracy on both optical and SAR datasets, providing the first efficient, generalizable, zero-annotation solution for semantic parsing in Earth observation.

📝 Abstract
Semantic segmentation of remote sensing (RS) images is pivotal for comprehensive Earth observation, but the demand for interpreting new object categories, coupled with the high expense of manual annotation, poses significant challenges. Although open-vocabulary semantic segmentation (OVSS) offers a promising solution, existing frameworks designed for natural images are insufficient for the unique complexities of RS data. They struggle with vast scale variations and fine-grained details, and their adaptation often relies on extensive, costly annotations. To address this critical gap, this paper introduces SegEarth-OV, the first framework for annotation-free open-vocabulary segmentation of RS images. Specifically, we propose SimFeatUp, a universal upsampler that robustly restores high-resolution spatial details from coarse features, correcting distorted target shapes without any task-specific post-training. We also present a simple yet effective Global Bias Alleviation operation that subtracts the inherent global context from patch features, significantly enhancing local semantic fidelity. These components empower SegEarth-OV to effectively harness the rich semantics of pre-trained VLMs, making OVSS possible in optical RS contexts. Furthermore, to extend the framework's universality to other challenging RS modalities like SAR images, where large-scale VLMs are unavailable and expensive to create, we introduce AlignEarth, a distillation-based strategy that efficiently transfers semantic knowledge from an optical VLM encoder to a SAR encoder, bypassing the need to build SAR foundation models from scratch and enabling universal OVSS across diverse sensor types. Extensive experiments on both optical and SAR datasets validate that SegEarth-OV achieves dramatic improvements over SOTA methods, establishing a robust foundation for annotation-free and open-world Earth observation.
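The abstract describes AlignEarth as distilling semantic knowledge from a frozen optical VLM encoder into a SAR encoder. A minimal sketch of one plausible objective, a per-patch feature-matching loss; the MSE form, variable names, and array shapes here are assumptions, since the abstract does not specify the loss:

```python
import numpy as np

def feature_distill_loss(student_feats: np.ndarray, teacher_feats: np.ndarray) -> float:
    """MSE between SAR-student and frozen optical-teacher patch features.

    Both arrays are (num_patches, dim). The teacher features come from a
    pretrained optical VLM encoder and are not updated. MSE is an assumed
    choice; the paper could instead use cosine or contrastive objectives.
    """
    return float(np.mean((student_feats - teacher_feats) ** 2))

# Toy check: identical features give zero loss; a unit offset gives loss 1.
teacher = np.ones((4, 8))
student = np.zeros((4, 8))
loss = feature_distill_loss(student, teacher)
```

Minimizing this loss over SAR inputs pulls the student encoder's feature space toward the optical VLM's, which is what lets text embeddings aligned with the optical encoder remain usable for SAR segmentation.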
Problem

Research questions and friction points this paper is trying to address.

Addresses annotation-free open-vocabulary segmentation for remote sensing images
Overcomes scale variations and fine-grained detail challenges in RS data
Extends open-vocabulary segmentation to SAR imagery without foundation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

SimFeatUp upsampler restores high-resolution spatial details
Global Bias Alleviation enhances local semantic fidelity
AlignEarth transfers knowledge from optical to SAR encoders
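The Global Bias Alleviation bullet above amounts to subtracting an image-level global context from each patch feature. A minimal sketch, using the mean patch embedding as a stand-in for the global context (an assumption; the paper may derive it differently, e.g. from a [CLS] token) and a hypothetical strength knob `alpha`:

```python
import numpy as np

def global_bias_alleviation(patch_feats: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Subtract the (estimated) global feature from every patch feature.

    patch_feats: (num_patches, dim) embeddings from a ViT-style encoder.
    alpha: subtraction strength (hypothetical knob, not from the paper).
    """
    global_feat = patch_feats.mean(axis=0, keepdims=True)  # stand-in for global context
    return patch_feats - alpha * global_feat

# Toy example: patches sharing a strong common component plus small local signal.
rng = np.random.default_rng(0)
feats = rng.normal(size=(1, 8)) + rng.normal(scale=0.1, size=(4, 8))
corrected = global_bias_alleviation(feats)
# With alpha=1 the shared component is removed exactly in the mean,
# leaving only the patch-specific (local) variation.
```

Removing the shared component keeps patch features from all collapsing toward the dominant scene-level semantics, which is the "local semantic fidelity" the bullet refers to.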
👥 Authors
Kaiyu Li
Wilfrid Laurier University, Canada
Data governance and Data preparation · Data market and Data economy
Xiangyong Cao
School of Computer Science and Technology and Ministry of Education Key Laboratory of Intelligent Networks and Network Security, Xi'an Jiaotong University, Xi'an, 710049, China
Ruixun Liu
Undergraduate, Xi'an Jiaotong University
Computer vision
Shihong Wang
School of Computer Science and Technology and Ministry of Education Key Laboratory of Intelligent Networks and Network Security, Xi'an Jiaotong University, Xi'an, 710049, China
Zixuan Jiang
College of Artificial Intelligence, Xi'an Jiaotong University, Xi'an, 710049, China
Zhi Wang
School of Software Engineering, Xi'an Jiaotong University, Xi'an, 710049, China
Deyu Meng
Professor, Xi'an Jiaotong University
Machine Learning · Applied Mathematics · Computer Vision · Artificial Intelligence