🤖 AI Summary
To address the limited timeliness of global dust warning systems caused by difficulties in distinguishing dust aerosols from clouds and surface features, this paper proposes an end-to-end near-real-time dust detection method based on a 3D convolutional neural network. The approach innovatively incorporates split thermal infrared bands to enhance spectral discriminability, integrates multi-band normalization with local inpainting to mitigate MODIS data gaps, and establishes a spatial-spectral joint learning mechanism for pixel-level dust identification. Training efficiency is improved by 21×, enabling scalable global processing. Evaluated on 17 independent MODIS scenes, the model achieves an accuracy of 0.92 and a mean squared error of 0.014; dust plume core regions are detected with high fidelity to ground truth. These results validate both the effectiveness and practical applicability of the proposed method.
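The spatial-spectral joint learning above hinges on 3D convolution: a single kernel spans neighbouring bands as well as neighbouring pixels, so spectral shape and spatial texture are learned together. A minimal NumPy sketch of that operation (a hand-rolled valid-mode 3D cross-correlation on a toy cube; kernel sizes and dimensions here are illustrative, not the paper's architecture):

```python
import numpy as np

def conv3d_valid(cube, kernel):
    """Valid-mode 3D cross-correlation over a (bands, height, width)
    data cube. Illustrative only: the paper's network stacks many such
    layers with learned kernels."""
    kb, kh, kw = kernel.shape
    b, h, w = cube.shape
    out = np.zeros((b - kb + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # One kernel position mixes kb bands and a kh x kw
                # pixel neighbourhood in a single weighted sum.
                out[i, j, k] = np.sum(cube[i:i+kb, j:j+kh, k:k+kw] * kernel)
    return out

# Toy cube: 5 "bands" of an 8x8 patch; a 3x3x3 kernel couples
# adjacent bands and pixels in one operation -- the
# spatial-spectral coupling a 3D CNN exploits.
rng = np.random.default_rng(0)
cube = rng.normal(size=(5, 8, 8))
kernel = rng.normal(size=(3, 3, 3))
feat = conv3d_valid(cube, kernel)
print(feat.shape)  # (3, 6, 6)
```

In a real pixel-level classifier, many such kernels would be stacked and followed by a per-pixel dust/no-dust output layer; frameworks provide this as a built-in 3D convolution layer.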
📝 Abstract
Dust storms harm health and reduce visibility, so quick detection from satellites is needed. We present a near-real-time system that flags dust at the pixel level using multi-band imagery from the MODIS instruments aboard NASA's Terra and Aqua satellites. A 3D convolutional network learns patterns across all 36 bands, plus split thermal bands, to separate dust from clouds and surface features. Simple normalization and local gap filling handle missing data. An improved version raises training speed by 21× and supports fast processing of full scenes. On 17 independent MODIS scenes, the model reaches about 0.92 accuracy with a mean squared error of 0.014. Maps show strong agreement in plume cores, with most misses along edges. These results show that joint spectral-spatial learning can provide timely dust alerts at global scale; wider input windows or attention-based models may further sharpen plume edges.
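The "normalization and local filling" step can be sketched as follows. This is a hypothetical stand-in for the paper's preprocessing, assuming missing MODIS pixels are marked as NaN: each gap is filled with the mean of valid neighbours in a small window, then the band is standardized.

```python
import numpy as np

def normalize_and_fill(band, win=1):
    """Simple local gap filling followed by per-band z-score
    normalization. Assumed behaviour, not the paper's exact recipe:
    NaN pixels are replaced by the mean of valid neighbours in a
    (2*win+1)^2 window, then the band is standardized."""
    filled = band.copy()
    h, w = band.shape
    for r, c in np.argwhere(np.isnan(band)):
        # Window clipped at the image border; the NaN centre is
        # excluded from the mean automatically.
        patch = band[max(0, r - win):r + win + 1,
                     max(0, c - win):c + win + 1]
        vals = patch[~np.isnan(patch)]
        if vals.size:
            filled[r, c] = vals.mean()
    mu, sigma = np.nanmean(filled), np.nanstd(filled)
    return (filled - mu) / (sigma if sigma > 0 else 1.0)

band = np.arange(25, dtype=float).reshape(5, 5)
band[2, 2] = np.nan            # simulated missing pixel
out = normalize_and_fill(band)
print(np.isnan(out).any())     # False
```

Filling before normalizing keeps the band statistics from being skewed by sensor gaps, so every band enters the network on a comparable scale.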