Self-Supervision via Controlled Transformation and Unpaired Self-Conditioning for Low-Light Image Enhancement

📅 2025-03-01
🏛️ IEEE Transactions on Instrumentation and Measurement
📈 Citations: 2
Influential: 0
🤖 AI Summary
This paper addresses unsupervised low-light image enhancement in the absence of paired real-world low-light/normal-light training data. The authors propose an end-to-end unpaired framework with two key innovations: (1) a controlled-transformation-based self-supervision mechanism that applies invertible brightness/contrast transforms to the input and requires the enhancement to remain consistent under them, improving stability; and (2) an unpaired self-conditioning strategy, combined with detail-preserving, low-gradient-magnitude noise suppression, that learns pixel-wise adaptive intensity control from unpaired low-lit and well-lit images, mitigating artifacts and over-enhancement. Extensive experiments on multiple standard benchmarks demonstrate state-of-the-art performance, with notable PSNR and SSIM gains and more natural visual quality; ablation studies confirm the contribution of each component.
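The paper does not ship code, but the controlled-transformation self-supervision idea can be sketched as a consistency term: enhancing a transformed input should agree with transforming the enhanced output. This is a minimal illustration only; the gamma curve, the function name `transform_consistency_loss`, and the L1 penalty are assumptions, not the authors' exact formulation.

```python
import torch

def transform_consistency_loss(enhance, x, gamma_range=(0.7, 1.3)):
    """Hypothetical sketch: enforce that enhancement commutes with a
    controlled, invertible brightness transform T (here a random gamma curve).
    `enhance` is any callable mapping an image tensor to an enhanced tensor."""
    gamma = torch.empty(1).uniform_(*gamma_range).item()
    t_x = x.clamp(min=1e-6) ** gamma                  # T(x): controlled transform of the input
    y_of_tx = enhance(t_x)                            # enhance the transformed input
    t_of_y = enhance(x).clamp(min=1e-6) ** gamma      # apply the same transform to the enhanced output
    # L1 consistency: the two paths should yield (nearly) the same image
    return torch.mean(torch.abs(y_of_tx - t_of_y))
```

Minimizing this term pushes the network toward enhancements that are stable under brightness perturbations of the input, which is the "maintenance of enhancement in spite of the transformation" the abstract describes.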

📝 Abstract
Real-world low-light images captured by imaging devices suffer from poor visibility and require a domain-specific enhancement to produce artifact-free outputs that reveal details. However, it is usually challenging to create large-scale paired real-world low-light image datasets for training enhancement approaches. When trained with limited data, most supervised approaches do not perform well in generalizing to a wide variety of real-world images.

In this article, we propose an unpaired low-light image enhancement network leveraging novel controlled transformation-based self-supervision and unpaired self-conditioning strategies. The model determines the required degrees of enhancement at the input image pixels, which are learned from the unpaired low-lit and well-lit images without any direct supervision. The self-supervision is based on a controlled transformation of the input image and subsequent maintenance of its enhancement in spite of the transformation. The self-conditioning performs training of the model on unpaired images such that it does not enhance an already-enhanced image or a well-lit input image. The inherent noise in the input low-light images is handled by employing low gradient magnitude suppression in a detail-preserving manner. In addition, our noise handling is self-conditioned by preventing the denoising of noise-free well-lit images. The training based on low-light image enhancement-specific attributes allows our model to avoid paired supervision without compromising significantly in performance. While our proposed self-supervision aids consistent enhancement, our novel self-conditioning facilitates adequate enhancement.

Extensive experiments on multiple standard datasets demonstrate that our model, in general, outperforms the state-of-the-art both quantitatively and subjectively. Ablation studies show the effectiveness of our self-supervision and self-conditioning strategies, and the related loss functions.
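The self-conditioning described above reduces to two identity-style penalties: re-enhancing an already-enhanced image should change nothing, and a well-lit input should pass through untouched. The sketch below is illustrative only; the function name and the plain L1 losses are assumptions standing in for the paper's actual loss design.

```python
import torch

def self_conditioning_losses(enhance, low, well_lit):
    """Hypothetical sketch of the two self-conditioning terms.
    `enhance` is any callable mapping an image tensor to an enhanced tensor."""
    y = enhance(low)
    # idempotence: enhancing the enhanced output should leave it unchanged
    idempotence = torch.mean(torch.abs(enhance(y.detach()) - y.detach()))
    # well-lit preservation: a well-exposed image needs no enhancement
    preservation = torch.mean(torch.abs(enhance(well_lit) - well_lit))
    return idempotence, preservation
```

Together these terms give the model a stopping criterion it would otherwise lack without paired ground truth: enhancement proceeds only until the output itself looks "well-lit", which is how over-enhancement is avoided.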
Problem

Research questions and friction points this paper is trying to address.

Enhances low-light images without paired supervision
Handles noise while preserving image details
Uses self-supervision and self-conditioning for consistent enhancement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Controlled transformation-based self-supervision for enhancement
Unpaired self-conditioning to avoid over-enhancement
Low gradient magnitude suppression for noise handling
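The low-gradient-magnitude suppression in the last bullet can be pictured as edge-gated smoothing: regions with weak gradients (likely noise) are smoothed, while strong edges are kept. This is a loose sketch under assumed choices (Sobel gradients, a sigmoid gate, box-filter smoothing), not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def suppress_low_gradients(img, threshold=0.05, sharpness=50.0):
    """Illustrative sketch (threshold and sharpness are assumed parameters):
    smooth pixels whose Sobel gradient magnitude is low, preserve edges."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    c = img.shape[1]
    # depthwise Sobel filtering, one filter per channel
    gx = F.conv2d(img, kx.view(1, 1, 3, 3).repeat(c, 1, 1, 1), padding=1, groups=c)
    gy = F.conv2d(img, ky.view(1, 1, 3, 3).repeat(c, 1, 1, 1), padding=1, groups=c)
    mag = torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)
    # soft mask: near 0 in flat (noisy) regions, near 1 around strong edges
    edge_mask = torch.sigmoid(sharpness * (mag - threshold))
    blurred = F.avg_pool2d(img, 3, stride=1, padding=1)
    # keep detail where edges are strong, denoise where gradients are low
    return edge_mask * img + (1 - edge_mask) * blurred
```

The detail-preserving property comes from the soft gate: smoothing strength decays continuously as gradient magnitude rises, so textured regions are never flattened outright.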