Capturing Stable HDR Videos Using a Dual-Camera System

📅 2025-07-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address flickering and ghosting artifacts in alternating-exposure HDR video reconstruction caused by exposure fluctuations in reference frames, this paper proposes a dual-camera cooperative capture framework and an Exposure-Adaptive Fusion Network (EAFNet). The primary camera captures a stable single-exposure reference sequence, while the auxiliary camera synchronously acquires multi-exposure auxiliary frames. EAFNet incorporates a pre-alignment subnetwork and a reference-dominant asymmetric cross-feature fusion module to achieve precise feature alignment and dynamic weight assignment across exposures. Additionally, a discrete wavelet transform (DWT)-based multi-scale reconstruction scheme enhances fine-detail fidelity. Extensive evaluations on multiple benchmark datasets demonstrate that our method significantly suppresses flickering and motion-related artifacts, achieving state-of-the-art HDR video reconstruction quality. The source code and dataset are publicly available.
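The reference-dominant fusion idea above can be illustrated with a minimal cross-attention sketch: the stable reference frame supplies the queries, while the multi-exposure auxiliary frames supply keys and values, so the fusion weights across exposures are driven by the reference. This is a hypothetical illustration of the general technique under assumed toy feature vectors, not EAFNet's actual module.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attend(ref_q, aux_k, aux_v):
    """Reference-dominated cross-attention (illustrative sketch).

    ref_q: one query vector from reference features (length d).
    aux_k, aux_v: one key/value vector per auxiliary exposure (each length d).
    Returns the attention-weighted blend of the auxiliary value vectors.
    """
    d = len(ref_q)
    # Scaled dot-product scores between the reference query and each exposure's key.
    scores = [sum(q * k for q, k in zip(ref_q, key)) / math.sqrt(d) for key in aux_k]
    w = softmax(scores)  # dynamic weight per exposure, summing to 1
    return [sum(wi * v[j] for wi, v in zip(w, aux_v)) for j in range(len(aux_v[0]))]
```

Exposures whose features align with the reference receive larger weights, which is the intuition behind letting the reference dominate the attention maps.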

📝 Abstract
In HDR video reconstruction, exposure fluctuations in reference images from alternating-exposure methods often result in flickering. To address this issue, we propose a dual-camera system (DCS) for HDR video acquisition, where one camera captures a consistent reference sequence while the other captures non-reference sequences for information supplementation. To tackle the challenges posed by video data, we introduce an exposure-adaptive fusion network (EAFNet) to achieve more robust results. EAFNet introduces a pre-alignment subnetwork that accounts for the influence of exposure, selectively emphasizing valuable features across different exposure levels. The enhanced features are then fused by an asymmetric cross-feature fusion subnetwork, which derives reference-dominated attention maps to improve image fusion by aligning cross-scale features and performing cross-feature fusion. Finally, the reconstruction subnetwork adopts a DWT-based multiscale architecture to reduce ghosting artifacts and refine features at different resolutions. Extensive experimental evaluations demonstrate that the proposed method achieves state-of-the-art performance on different datasets, validating the great potential of the DCS in HDR video reconstruction. The codes and data captured by DCS will be available at https://github.com/zqqqyu/DCS.
Problem

Research questions and friction points this paper is trying to address.

Reduces flickering in HDR videos from exposure fluctuations
Uses dual-camera system for stable reference and supplemental frames
Introduces EAFNet for exposure-adaptive fusion and artifact reduction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-camera system for stable HDR video
Exposure-adaptive fusion network for robust results
DWT-based multiscale architecture reduces ghosting
Qianyu Zhang
School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
Bolun Zheng
Hangzhou Dianzi University
multimedia, computer vision
Hangjia Pan
School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
Lingyu Zhu
Department of Computer Science, City University of Hong Kong
Zunjie Zhu
Lishui Institute of Hangzhou Dianzi University
Zongpeng Li
Tsinghua University
computer networks, network algorithms, network coding
Shiqi Wang
Department of Computer Science, City University of Hong Kong