🤖 AI Summary
This work addresses the significant degradation of autonomous driving perception systems under adverse weather conditions—such as rain, fog, and snow—where single-sensor modalities struggle to reliably recognize complex environmental states. To this end, the paper proposes LRC-WeatherNet, the first real-time weather classification framework to fuse LiDAR, RADAR, and camera data. The approach combines early fusion via a unified bird's-eye-view representation with mid-level feature fusion governed by a gating mechanism, dynamically adapting to the varying reliability of each sensor across different weather conditions. Experiments on the MSU-4S dataset demonstrate that LRC-WeatherNet consistently outperforms single-modality baselines across all nine weather classes, achieving both high classification accuracy and computational efficiency.
📝 Abstract
Autonomous vehicles face major perception and navigation challenges in adverse weather such as rain, fog, and snow, which degrades the performance of LiDAR, RADAR, and RGB camera sensors. While each sensor type offers unique strengths, such as RADAR's robustness in poor visibility and LiDAR's precision in clear conditions, each also suffers distinct limitations when exposed to environmental obstructions. This study proposes LRC-WeatherNet, a novel multi-sensor fusion framework that integrates LiDAR, RADAR, and camera data for real-time classification of weather conditions. By employing both early fusion via a unified bird's-eye-view representation and mid-level gated fusion of modality-specific feature maps, our approach adapts to the varying reliability of each sensor under changing weather. Evaluated on the extensive MSU-4S dataset covering nine weather types, LRC-WeatherNet achieves superior classification performance and computational efficiency, significantly outperforming unimodal baselines in adverse conditions. This work is the first to combine all three modalities for robust, real-time weather classification in autonomous driving. We release our trained models and source code at https://github.com/nouralhudaalbashir/LRC-WeatherNet.
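The mid-level gated fusion described above can be sketched in a few lines. The snippet below is a minimal NumPy illustration of the general idea, not the paper's exact architecture: each modality's feature map is global-average-pooled, a (here randomly initialized, in practice learned) gate network scores each modality, and a softmax over those scores produces per-modality reliability weights used to blend the feature maps. All shapes, the `gate_w` parameterization, and the scalar-gate design are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature maps for the three modalities, each of shape (C, H, W).
# These stand in for the modality-specific encoder outputs.
C, H, W = 8, 4, 4
feats = {m: rng.standard_normal((C, H, W)) for m in ("lidar", "radar", "camera")}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_fusion(feats, gate_w):
    """Mid-level gated fusion (illustrative sketch): a scalar reliability
    gate per modality, computed from global-average-pooled features,
    weights each modality's feature map before summation."""
    names = list(feats)
    pooled = np.stack([feats[m].mean(axis=(1, 2)) for m in names])  # (M, C)
    scores = pooled @ gate_w                                        # (M,)
    gates = softmax(scores)                                         # sum to 1
    fused = sum(g * feats[m] for g, m in zip(gates, names))         # (C, H, W)
    return fused, dict(zip(names, gates))

gate_w = rng.standard_normal(C)  # stand-in for learned gate parameters
fused, gates = gated_fusion(feats, gate_w)
print(fused.shape)  # (8, 4, 4)
```

In adverse weather, a trained gate of this kind can down-weight a degraded modality (e.g. the camera in dense fog) while the fused map retains the spatial layout shared by all three inputs.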