🤖 AI Summary
To address slow convergence and unstable cross-regional generalization in semantic segmentation of SAR imagery, particularly for water body detection, where both problems stem from the data's complex statistical distributions, this paper introduces Mode Normalization (MN) as a drop-in replacement for standard normalization layers in U-Net and SegNet backbones, without altering the network architecture or increasing the parameter count. The proposed method significantly accelerates training convergence (reducing average training time by ~40%) and improves generalization stability across diverse geographical regions (decreasing cross-validation standard deviation by 32%), while maintaining segmentation accuracy comparable to the baseline models and improving inference efficiency. This work establishes a lightweight, robust, plug-and-play normalization paradigm tailored to SAR remote sensing image segmentation.
📝 Abstract
Segmenting Synthetic Aperture Radar (SAR) images is crucial for many remote sensing applications, particularly water body detection. However, deep learning-based segmentation models often face challenges related to convergence speed and stability, mainly due to the complex statistical distribution of SAR data. In this study, we evaluate the impact of mode normalization on two widely used semantic segmentation models, U-Net and SegNet. Specifically, we integrate mode normalization to reduce convergence time while maintaining the performance of the baseline models. Experimental results demonstrate that mode normalization significantly accelerates convergence. Furthermore, cross-validation results indicate that normalized models exhibit greater stability across different zones. These findings highlight the effectiveness of normalization in improving computational efficiency and generalization in SAR image segmentation.
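The abstract presents mode normalization as a drop-in replacement for standard normalization layers. A minimal sketch of such a layer is given below, following the soft mode-assignment idea of mode normalization (Deecke et al., 2019): each sample is softly assigned to one of K modes by a small gating network, and each mode maintains its own batch statistics. The class name `ModeNorm2d`, the pooled-feature gating network, and the training-mode-only statistics are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModeNorm2d(nn.Module):
    """Simplified mode normalization layer (sketch, not the paper's code).

    Inputs are softly assigned to K modes via a gating network on
    globally pooled features; each mode normalizes with its own
    weighted batch statistics, and the outputs are recombined.
    Training-time statistics only (no running averages), for brevity.
    """

    def __init__(self, num_features, num_modes=2, eps=1e-5):
        super().__init__()
        self.K = num_modes
        self.eps = eps
        # Gating network: pooled channel vector -> K mode logits.
        self.gate = nn.Linear(num_features, num_modes)
        # Shared affine parameters, as in standard BatchNorm.
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):
        N, C, H, W = x.shape
        # Soft mode assignments per sample: (N, K).
        g = F.softmax(self.gate(x.mean(dim=(2, 3))), dim=1)
        gw = g.t().reshape(self.K, N, 1, 1, 1)        # (K, N, 1, 1, 1)
        xe = x.unsqueeze(0)                            # (1, N, C, H, W)
        # Weighted per-mode, per-channel mean and variance over batch
        # and spatial dimensions.
        denom = gw.sum(dim=1, keepdim=True) * H * W    # (K, 1, 1, 1, 1)
        mu = (gw * xe).sum(dim=(1, 3, 4), keepdim=True) / denom
        var = (gw * (xe - mu) ** 2).sum(dim=(1, 3, 4), keepdim=True) / denom
        xn = (xe - mu) / torch.sqrt(var + self.eps)    # (K, N, C, H, W)
        # Recombine the K normalized versions with the gating weights.
        y = (gw * xn).sum(dim=0)                       # (N, C, H, W)
        return y * self.weight.view(1, C, 1, 1) + self.bias.view(1, C, 1, 1)
```

Because the layer keeps the interface of `nn.BatchNorm2d` (same input/output shape, one affine pair per channel), swapping it into a U-Net or SegNet encoder-decoder requires no architectural change, which is the "drop-in" property the summary emphasizes. With K = 1 the gate is constant and the layer reduces to ordinary batch normalization.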