AI Summary
To address modality heterogeneity, class imbalance, and limited interpretability in land-cover classification from optical and SAR remote sensing imagery of natural scenes, this paper proposes CLAIRE, a dual-encoder framework with cross-modal attention fusion. We design the RIFT hybrid loss function, which integrates Weighted Focal Loss and Tversky Loss to mitigate long-tailed class distributions. Moreover, we introduce Phi-3, a small language model, as the first sample-level semantic explanation module for remote sensing classification, improving rare-class recognition and decision transparency. Evaluated on the WHU-OPT-SAR and OpenEarthMap-SAR benchmarks, CLAIRE achieves 56.02% and 59.89% mean Intersection over Union (mIoU), respectively, and maintains an mIoU of 86.86% under cloud occlusion. These results surpass state-of-the-art methods, demonstrating superior accuracy, robustness to environmental degradation, and interpretability.
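The cross-modal attention fusion described above can be sketched as a single cross-attention step in which optical features query SAR features. This is an illustrative simplification under assumed shapes (one head, flattened spatial features, no normalization layers or gating), not the paper's exact module; all function and parameter names here are hypothetical.

```python
import numpy as np

def cross_modal_attention_fusion(opt_feats, sar_feats, wq, wk, wv):
    """Single-head cross-attention fusion sketch (illustrative only).

    opt_feats, sar_feats: (L, d) flattened per-pixel features from the
    optical and SAR encoders; wq, wk, wv: (d, d) learned projections.
    """
    q = opt_feats @ wq          # queries come from the optical stream
    k = sar_feats @ wk          # keys come from the SAR stream
    v = sar_feats @ wv          # values come from the SAR stream

    # Scaled dot-product attention with a numerically stable softmax.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)

    # Residual fusion: the optical stream is enriched with attended
    # SAR context, highlighting complementary spatial/textural cues.
    return opt_feats + attn @ v
```

A symmetric call with the roles of the two modalities swapped would let SAR features attend to optical context as well.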
Abstract
Accurate land cover classification from satellite imagery is crucial for environmental monitoring and sustainable resource management. It remains challenging, however, due to the complexity of natural landscapes, the visual similarity between classes, and the significant class imbalance in available datasets. To address these issues, we propose a dual-encoder architecture that independently extracts modality-specific features from optical and Synthetic Aperture Radar (SAR) imagery and fuses them through a cross-modality attention module; we name the full framework Cross-modality Land cover segmentation with Attention and Imbalance-aware Reasoning-Enhanced Explanations (CLAIRE). This fusion mechanism highlights complementary spatial and textural features, enabling the network to better capture detailed and diverse land cover patterns. We further incorporate a hybrid loss function, RIFT (Rare-Instance Focal-Tversky), which combines Weighted Focal Loss and Tversky Loss to address class imbalance and improve segmentation performance on underrepresented categories. Our model achieves competitive performance across multiple benchmarks: a mean Intersection over Union (mIoU) of 56.02% and an Overall Accuracy (OA) of 84.56% on the WHU-OPT-SAR dataset; strong generalization with an mIoU of 59.89% and an OA of 73.92% on the OpenEarthMap-SAR dataset; and remarkable robustness under cloud-obstructed conditions, with an mIoU of 86.86% and an OA of 94.58% on the PIE-RGB-SAR dataset. Additionally, we introduce a metric-driven reasoning module powered by a Small Language Model (Phi-3), which generates expert-level, sample-specific justifications for model predictions, enhancing transparency and interpretability.
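As an illustration of how a Focal-plus-Tversky hybrid of the kind RIFT describes can be assembled, the sketch below sums a weighted focal term (which down-weights easy pixels) and a Tversky term (which trades precision against recall per class). All hyperparameter names and values (`gamma`, `alpha`, `beta`, `lam`) are placeholders, not the paper's tuned settings.

```python
import numpy as np

def softmax(x, axis=1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def rift_loss(logits, targets, class_weights=None,
              gamma=2.0, alpha=0.3, beta=0.7, lam=0.5, eps=1e-6):
    """Illustrative Weighted-Focal + Tversky hybrid (not the paper's code).

    logits: (N, C, H, W) raw scores; targets: (N, H, W) integer labels.
    """
    n, c, h, w = logits.shape
    probs = softmax(logits, axis=1)
    onehot = np.eye(c)[targets].transpose(0, 3, 1, 2)  # (N, C, H, W)

    # Focal term: (1 - p_t)^gamma scales the cross-entropy so that
    # well-classified pixels contribute little; optional class_weights
    # up-weight rare classes.
    pt = np.clip((probs * onehot).sum(axis=1), eps, 1.0)
    wmap = 1.0 if class_weights is None else np.asarray(class_weights)[targets]
    focal = (wmap * (1.0 - pt) ** gamma * -np.log(pt)).mean()

    # Tversky term: alpha penalizes false positives, beta false negatives;
    # beta > alpha favors recall on underrepresented classes.
    axes = (0, 2, 3)
    tp = (probs * onehot).sum(axis=axes)
    fp = (probs * (1.0 - onehot)).sum(axis=axes)
    fn = ((1.0 - probs) * onehot).sum(axis=axes)
    tversky = 1.0 - ((tp + eps) / (tp + alpha * fp + beta * fn + eps)).mean()

    return lam * focal + (1.0 - lam) * tversky
```

Perfect predictions drive both terms toward zero, so the hybrid behaves as a proper loss while letting the Tversky weights bias the optimizer toward rare-class recall.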