🤖 AI Summary
To address the image quality degradation caused by strong-light shadows and by low-illumination conditions at night, both of which impair perception performance in autonomous driving, this paper proposes an end-to-end image enhancement pipeline that unifies shadow correction and nighttime adaptive enhancement. The method jointly optimizes illumination uniformity and visual perceptual quality via multi-scale illumination estimation and contrast constraints, integrating local histogram equalization with global semantic guidance while preserving physical plausibility. Experimental results show that the approach significantly outperforms CLAHE on illumination uniformity metrics. On downstream tasks, it improves the mIoU of YOLO-based drivable area segmentation by 6.2% and boosts nighttime object detection recall by 11.4%, while maintaining color fidelity and fine-grained texture details.
📝 Abstract
Enhancement of RGB camera images is of particular interest given its expanding range of applications, such as medical imaging, satellite imaging, and automated driving. In autonomous driving, various techniques are used to enhance image quality under challenging lighting conditions. These include artificial augmentation to improve visibility in poor nighttime conditions, illumination-invariant imaging to reduce the impact of lighting variations, and shadow mitigation to ensure consistent image clarity in bright daylight. This paper proposes a pipeline for Shadow Erosion and Nighttime Adaptability in images for automated driving applications while preserving color and texture details. The pipeline is compared against the widely used CLAHE technique and evaluated on illumination uniformity and visual perception quality metrics. The results demonstrate a significant improvement over CLAHE, as well as improved performance of a YOLO-based drivable area segmentation algorithm.
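As context for the CLAHE baseline mentioned above, the sketch below shows plain histogram equalization in NumPy: remapping 8-bit intensities through the cumulative histogram so they spread over the full range. CLAHE extends this idea by equalizing per tile with a clip limit on the histogram to cap local contrast amplification. This is an illustrative sketch of the baseline's core operation, not code from the paper; the function name and the 8-bit assumption are mine.

```python
import numpy as np

def hist_equalize(gray: np.ndarray) -> np.ndarray:
    """Global histogram equalization of an 8-bit grayscale image.

    Builds the intensity CDF and remaps each pixel so the output
    histogram is approximately uniform. CLAHE (the paper's baseline)
    applies the same remap per tile, with a clip limit on the histogram.
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # smallest nonzero CDF value
    # Stretch the CDF to cover [0, 255]; constant images are left as-is.
    if cdf[-1] == cdf_min:
        return gray.copy()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

# Toy low-contrast "nighttime" patch: values crowded into a dark band.
dark = np.array([[40, 41], [42, 43]], dtype=np.uint8)
out = hist_equalize(dark)
print(out)  # intensities spread across the full 0-255 range
```

For color driving frames, such equalization is typically applied only to a luminance channel (e.g., L in LAB space) to avoid color shifts, which is also why the paper's emphasis on preserving color fidelity matters.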