🤖 AI Summary
Frequent cloud cover in tropical regions severely limits the availability of optical remote sensing imagery, while deep learning models often lose critical spatial and spectral details during downsampling. Method: This paper proposes (1) a lightweight Normalized Difference Index (NDI) injection technique, integrated into the final decoder layers to preserve key spatial features; (2) a physically constrained, realistic cloud synthesis and injection framework to systematically evaluate model robustness under cloud occlusion; and (3) a multimodal fusion strategy combining Sentinel-1 SAR and Sentinel-2 optical data. Results: On the DFC2020 dataset, NDI injection improves mIoU by 1.99% and 2.78% for U-Net and DeepLabV3, respectively, under cloud-free conditions. Under cloudy conditions, radar–optical fusion significantly outperforms optical-only input, demonstrating the effectiveness and generalizability of the proposed approach under complex meteorological conditions.
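To make the NDI-injection idea concrete, below is a minimal PyTorch sketch of how normalized difference indices computed from Sentinel-2 bands could be concatenated with the last decoder feature map just before classification. The band layout, the choice of indices (NDVI and NDWI here), and the 1×1 fusion head are illustrative assumptions, not the paper's exact implementation:

```python
# Hypothetical sketch of NDI injection at the end of the decoder (PyTorch).
# Band positions, index choices, and the fusion head are assumptions;
# the paper's exact configuration is not given in this summary.
import torch
import torch.nn as nn

def ndi(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Normalized Difference Index: (a - b) / (a + b), stabilized with eps."""
    return (a - b) / (a + b + eps)

class NDIInjectionHead(nn.Module):
    """Concatenates NDI maps with the final decoder feature map, then
    projects to class logits with a 1x1 convolution. Assumes the decoder
    output has the same spatial size as the input Sentinel-2 bands."""
    def __init__(self, decoder_channels: int, num_ndis: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Conv2d(decoder_channels + num_ndis, num_classes,
                                    kernel_size=1)

    def forward(self, decoder_feats: torch.Tensor,
                s2_bands: torch.Tensor) -> torch.Tensor:
        # Assumed Sentinel-2 channel layout: green at index 2, red at 3,
        # NIR at 7 (slices keep the channel dimension).
        green, red, nir = s2_bands[:, 2:3], s2_bands[:, 3:4], s2_bands[:, 7:8]
        ndvi = ndi(nir, red)    # vegetation index
        ndwi = ndi(green, nir)  # water index
        x = torch.cat([decoder_feats, ndvi, ndwi], dim=1)
        return self.classifier(x)

# Usage (hypothetical shapes):
# head = NDIInjectionHead(decoder_channels=64, num_ndis=2, num_classes=10)
# logits = head(decoder_feats, s2_bands)  # (B, 10, H, W)
```

Because the indices enter only at the final layers, the extra cost is two element-wise band operations and two additional input channels to the classifier, which matches the "minimal additional computation" claim.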
📝 Abstract
Supervised deep learning for land cover semantic segmentation (LCS) relies on labeled satellite data. However, most existing Sentinel-2 datasets are cloud-free, which limits their usefulness in tropical regions where clouds are common. To properly evaluate the extent of this problem, we developed a cloud injection algorithm that simulates realistic cloud cover, allowing us to test how Sentinel-1 radar data can fill the gaps caused by cloud-obstructed optical imagery. We also address the loss of spatial and spectral detail caused by encoder downsampling in deep networks. To mitigate this loss, we propose a lightweight method that injects Normalized Difference Indices (NDIs) into the final decoding layers, enabling the model to retain key spatial features with minimal additional computation. Injecting NDIs enhanced land cover segmentation performance on the DFC2020 dataset, yielding mIoU improvements of 1.99% for U-Net and 2.78% for DeepLabV3 on cloud-free imagery. Under cloud-covered conditions, incorporating Sentinel-1 data led to significant performance gains across all models compared to using optical data alone, highlighting the effectiveness of radar–optical fusion in challenging atmospheric scenarios.
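The abstract does not detail the physically constrained cloud model, so the NumPy sketch below shows one plausible way such an injection algorithm could work for robustness testing: a multi-octave smooth-noise field acts as a cloud thickness map and is alpha-blended over the optical bands, while the Sentinel-1 SAR bands would be left untouched, since radar penetrates clouds. The coverage thresholding, octave weights, and cloud brightness value are all assumptions, not the paper's algorithm:

```python
# Illustrative cloud-injection sketch (NumPy), not the paper's method:
# a fractal-looking noise field serves as cloud thickness and is
# alpha-blended into the optical bands. SAR inputs are unaffected.
import numpy as np

def _value_noise(shape, scale, rng):
    """Smooth random field: coarse noise bilinearly upsampled to `shape`."""
    h, w = shape
    coarse = rng.random((max(h // scale, 2), max(w // scale, 2)))
    ys = np.linspace(0, coarse.shape[0] - 1, h)
    xs = np.linspace(0, coarse.shape[1] - 1, w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, coarse.shape[0] - 1)
    x1 = np.minimum(x0 + 1, coarse.shape[1] - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = coarse[y0][:, x0] * (1 - wx) + coarse[y0][:, x1] * wx
    bot = coarse[y1][:, x0] * (1 - wx) + coarse[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def inject_clouds(optical, coverage=0.5, seed=0):
    """Blend a synthetic cloud layer into optical bands.

    optical: float array (C, H, W) with reflectances scaled to [0, 1].
    coverage: approximate fraction of pixels receiving nonzero cloud alpha.
    """
    rng = np.random.default_rng(seed)
    h, w = optical.shape[1:]
    # Sum octaves, coarse scales weighted highest, for fractal structure.
    octaves = [(64, 1.0), (32, 0.5), (16, 0.25), (8, 0.125)]
    field = sum(amp * _value_noise((h, w), s, rng) for s, amp in octaves)
    field = (field - field.min()) / (field.max() - field.min())
    # Threshold so roughly `coverage` of pixels become cloudy.
    thresh = np.quantile(field, 1 - coverage)
    alpha = np.clip((field - thresh) / (1 - thresh + 1e-6), 0, 1)
    cloud_brightness = 0.9  # assumed near-white cloud-top reflectance
    return optical * (1 - alpha) + cloud_brightness * alpha
```

Injecting clouds at controlled coverage levels into otherwise cloud-free scenes like DFC2020 lets one sweep occlusion severity and measure how segmentation accuracy degrades with and without the Sentinel-1 channels, which is the evaluation the paper's framework enables.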