🤖 AI Summary
Existing low-light image enhancement methods often overlook the physical noise characteristics inherent in the imaging process, leading to distorted outputs or residual noise. This work proposes a robust enhancement framework grounded in a novel paradigm: treating physical noise as an adversarial attack. By leveraging Physical Degradation Synthesis (PDS) and inverse ISP modeling, the method simulates realistic noise attacks and introduces a dual-layer adaptive defense mechanism—comprising Degradation-Aware Mixture of Experts (DA-MoE) and Adaptive Metric Defense (AMD)—to dynamically respond to varying degradation intensities. To the best of our knowledge, this is the first approach to explicitly model physical noise as an adversarial perturbation, enabling accurate representation of real-world low-light degradations. The framework effectively suppresses noise while preserving structural details, significantly improving plug-and-play performance.
📝 Abstract
Limited illumination often causes severe physical noise and detail degradation in images. Existing Low-Light Image Enhancement (LLIE) methods frequently treat enhancement as a blind black-box mapping that overlooks the physical noise transformation during imaging, which leads to suboptimal performance. To address this, we propose a novel LLIE approach, conceptually formulated as a physics-based attack and display-adaptive defense paradigm. Specifically, on the attack side, we establish a Physical Degradation Synthesis (PDS) pipeline. Unlike standard data augmentation, PDS explicitly models Image Signal Processor (ISP) inversion to the RAW domain, injects physically plausible photon and read noise, and re-projects the data to the sRGB domain. This generates high-fidelity training pairs with explicitly parameterized degradation vectors, effectively simulating realistic attacks on clean signals. On the defense side, we construct a dual-layer fortified system. A noise predictor estimates degradation parameters from the input sRGB image. These estimates guide a Degradation-Aware Mixture of Experts (DA-MoE), which dynamically routes features to experts specialized in handling specific noise intensities. Furthermore, we introduce an Adaptive Metric Defense (AMD) mechanism that dynamically calibrates the feature embedding space based on noise severity, ensuring robust representation learning under severe degradation. Extensive experiments demonstrate that our approach offers significant plug-and-play performance enhancement for existing benchmark LLIE methods, effectively suppressing real-world noise while preserving structural fidelity. The source code is available at https://github.com/bywlzts/Attack-defense-llie.
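The attack-side idea of the abstract, injecting physically plausible photon (shot) and read noise into a linear RAW image and returning the degradation parameters alongside the noisy result, can be sketched as follows. This is a minimal illustration of the standard Poisson-Gaussian sensor noise model, not the paper's actual PDS implementation; the function name `inject_physical_noise` and the parameter values (`photon_gain`, `read_sigma`) are hypothetical placeholders, since real values are camera-specific.

```python
import numpy as np

def inject_physical_noise(raw, photon_gain=0.01, read_sigma=0.002, rng=None):
    """Attack a clean linear RAW image (values in [0, 1]) with sensor noise.

    photon_gain: hypothetical electrons-to-signal scale; smaller values mean
                 more photons and hence weaker relative shot noise.
    read_sigma:  std of the signal-independent Gaussian read noise.
    Returns the noisy image and the degradation vector used to generate it.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Photon (shot) noise: signal-dependent, modeled as Poisson photon counts.
    electrons = raw / photon_gain
    shot = rng.poisson(electrons) * photon_gain
    # Read noise: signal-independent Gaussian from the sensor electronics.
    noisy = shot + rng.normal(0.0, read_sigma, size=raw.shape)
    # Degradation vector parameterizing this attack, usable as supervision
    # for a noise predictor on the defense side.
    deg_vec = np.array([photon_gain, read_sigma])
    return np.clip(noisy, 0.0, 1.0), deg_vec

# Usage: degrade a synthetic clean RAW patch with a fixed seed.
clean = np.full((4, 4), 0.25)
noisy, deg_vec = inject_physical_noise(clean, rng=np.random.default_rng(0))
```

In the full pipeline described above, this step would sit between ISP inversion (sRGB to RAW) and re-projection back to sRGB, so the noise statistics match the sensor domain where they physically arise.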