Few-Shot Remote Sensing Image Scene Classification with CLIP and Prompt Learning

📅 2025-10-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Remote sensing scene classification faces challenges of scarce labeled data, significant cross-domain distribution shifts, and poor domain adaptability of vision-language models (e.g., CLIP). Method: We propose a remote sensing–oriented few-shot prompt learning framework. It systematically integrates context optimization, conditional prompting, multimodal prompting, and semantic regularization prompting, and introduces a novel self-regulating constrained prompting mechanism to achieve lightweight, efficient semantic adaptation of CLIP. Only prompt parameters are fine-tuned while image and text encoders remain frozen—preserving zero-shot transferability. Contribution/Results: Our method consistently outperforms zero-shot CLIP and linear-probe baselines across multiple remote sensing benchmarks, achieving state-of-the-art performance—especially in cross-dataset generalization. It establishes a new paradigm for domain adaptation of vision-language models in remote sensing, enabling effective knowledge transfer with minimal parameter updates.

Technology Category

Application Category

📝 Abstract
Remote sensing applications increasingly rely on deep learning for scene classification. However, their performance is often constrained by the scarcity of labeled data and the high cost of annotation across diverse geographic and sensor domains. While recent vision-language models like CLIP have shown promise by learning transferable representations at scale by aligning visual and textual modalities, their direct application to remote sensing remains suboptimal due to significant domain gaps and the need for task-specific semantic adaptation. To address this critical challenge, we systematically explore prompt learning as a lightweight and efficient adaptation strategy for few-shot remote sensing image scene classification. We evaluate several representative methods, including Context Optimization, Conditional Context Optimization, Multi-modal Prompt Learning, and Prompting with Self-Regulating Constraints. These approaches reflect complementary design philosophies: from static context optimization to conditional prompts for enhanced generalization, multi-modal prompts for joint vision-language adaptation, and semantically regularized prompts for stable learning without forgetting. We benchmark these prompt-learning methods against two standard baselines: zero-shot CLIP with hand-crafted prompts and a linear probe trained on frozen CLIP features. Through extensive experiments on multiple benchmark remote sensing datasets, including cross-dataset generalization tests, we demonstrate that prompt learning consistently outperforms both baselines in few-shot scenarios. Notably, Prompting with Self-Regulating Constraints achieves the most robust cross-domain performance. Our findings underscore prompt learning as a scalable and efficient solution for bridging the domain gap in satellite and aerial imagery, providing a strong foundation for future research in this field.
Problem

Research questions and friction points this paper is trying to address.

Addressing domain gaps in remote sensing scene classification
Improving few-shot learning with lightweight prompt adaptation
Enhancing cross-domain generalization for satellite imagery
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prompt learning adapts CLIP for remote sensing
Multi-modal prompts align vision-language satellite data
Self-regulating constraints enable robust cross-domain generalization
🔎 Similar Papers
No similar papers found.