🤖 AI Summary
Adverse weather severely degrades visual perception, and existing fixed-parameter models trained on synthetic data struggle to generalize to complex real-world degradation scenarios. To address this, we propose a two-level reinforcement-learning-driven adaptive image restoration framework: an upper-level meta-controller dynamically selects and schedules restoration models, while a lower-level local optimizer performs perturbation-based optimization guided by a no-reference image quality measure. Leveraging the physics-informed, high-fidelity synthetic dataset HFLS-Weather, the method supports cold-start initialization and online adaptation without paired supervision, eliminating any reliance on real paired training data. Extensive experiments across diverse real-world adverse conditions, including rain, fog, snow, and haze, demonstrate significant gains over state-of-the-art methods, with strong generalization and continuous adaptability. The source code is publicly available.
📝 Abstract
Adverse weather severely impairs real-world visual perception, while existing fixed-parameter vision models trained on synthetic data struggle to generalize to complex degradations. To address this, we first construct HFLS-Weather, a physics-driven, high-fidelity dataset that simulates diverse weather phenomena, and then design a dual-level reinforcement learning framework that uses HFLS-Weather for cold-start initialization. Within this framework, at the local level, weather-specific restoration models are refined through perturbation-driven image quality optimization, enabling reward-based learning without paired supervision; at the global level, a meta-controller dynamically orchestrates model selection and execution order according to scene degradation. The framework enables continuous adaptation to real-world conditions and achieves state-of-the-art performance across a wide range of adverse weather scenarios. Code is available at https://github.com/xxclfy/AgentRL-Real-Weather
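The dual-level control flow described in the abstract can be sketched as a toy loop. Everything here is a hypothetical illustration, not the paper's implementation: `quality_score` stands in for a learned no-reference IQA metric (e.g., NIQE or BRISQUE), each restoration model is reduced to a single tunable scalar, and the meta-controller is a simple epsilon-greedy bandit over the model pool.

```python
import random

# Toy stand-in for a no-reference image quality metric (higher is better);
# both the scene encoding and this scorer are hypothetical.
def quality_score(scene, param):
    return scene["base"] - abs(param - scene["ideal"])

def local_perturbation_step(scene, param, sigma=0.1):
    """Lower level: perturb the restoration parameter and keep the change
    only if the no-reference score improves (no paired ground truth)."""
    candidate = param + random.gauss(0.0, sigma)
    if quality_score(scene, candidate) > quality_score(scene, param):
        return candidate
    return param

class MetaController:
    """Upper level: epsilon-greedy selection among restoration models,
    favoring whichever earns the highest average quality reward."""
    def __init__(self, model_names, epsilon=0.1):
        self.epsilon = epsilon
        self.stats = {m: [0.0, 0] for m in model_names}  # reward sum, count

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))
        return max(self.stats,
                   key=lambda m: self.stats[m][0] / (self.stats[m][1] or 1))

    def update(self, model, reward):
        self.stats[model][0] += reward
        self.stats[model][1] += 1

random.seed(0)
scene = {"ideal": 0.8, "base": 1.0}                      # one toy degraded scene
models = {"derain": 0.0, "dehaze": 0.0, "desnow": 0.0}   # tunable parameter per model

ctrl = MetaController(models)
for _ in range(200):                                     # online adaptation loop
    name = ctrl.select()
    models[name] = local_perturbation_step(scene, models[name])
    ctrl.update(name, quality_score(scene, models[name]))
```

In the actual framework the lower level would perturb the weights of weather-specific restoration networks and the reward would come from a no-reference quality model; here both are collapsed to scalars so the two-level scheduling and reward loop stay visible.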