🤖 AI Summary
Existing neural video codecs (NVCs) suffer from a structural mismatch between their reference-frame configuration and their hierarchical quality structure, which limits coding efficiency. To address this, the authors propose EHVC, an efficient hierarchical video coding framework that jointly optimizes three key components to align these structures: (1) a hierarchical multi-reference scheme, drawing on traditional codec design, to strengthen temporal dependency modeling; (2) a lookahead strategy that exploits encoder-side context from future frames to improve motion compensation; and (3) a layer-wise quality scale combined with random quality training to stabilize reconstruction quality across layers. Trained end-to-end, EHVC integrates classical coding principles with the strengths of deep learning. Extensive experiments on standard benchmark datasets show that EHVC significantly outperforms state-of-the-art methods, achieving an average 18.7% BD-rate reduction while improving both PSNR and MS-SSIM, validating its compression efficiency and reconstruction fidelity.
📝 Abstract
Neural video codecs (NVCs), leveraging the power of end-to-end learning, have demonstrated remarkable coding efficiency improvements over traditional video codecs. Recent research has begun to pay attention to the quality structures in NVCs, optimizing them by introducing explicit hierarchical designs. However, less attention has been paid to the design of the reference structure, which fundamentally should be aligned with the hierarchical quality structure. In addition, there is still significant room to further optimize the hierarchical quality structure itself. To address these challenges, we propose EHVC, an efficient hierarchical neural video codec featuring three key innovations: (1) a hierarchical multi-reference scheme that draws on traditional video codec design to align the reference and quality structures, thereby resolving the reference-quality mismatch; (2) a lookahead strategy that utilizes encoder-side context from future frames to enhance the quality structure; and (3) a layer-wise quality scale with a random quality training strategy to stabilize quality structures during inference. With these improvements, EHVC achieves significantly superior performance to state-of-the-art NVCs. Code will be released at https://github.com/bytedance/NEVC.
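The abstract does not give implementation details, but the idea of aligning reference selection with a hierarchical (dyadic) quality structure can be illustrated with a minimal sketch. All function names and the GOP/layer conventions below are hypothetical assumptions for illustration, not the authors' actual method: frames on shallower temporal layers are coded at higher quality, and each frame references only past frames on equal-or-shallower layers.

```python
import random


def temporal_layer(idx: int, gop: int = 8) -> int:
    """Temporal layer of frame `idx` in a dyadic hierarchical GOP (assumed
    structure). Layer 0 holds the highest-quality anchor frames; deeper
    layers hold progressively lower-quality intermediate frames."""
    if idx % gop == 0:
        return 0
    layer, step = 1, gop // 2
    while idx % step != 0:
        layer += 1
        step //= 2
    return layer


def hierarchical_references(idx: int, gop: int = 8, num_refs: int = 2) -> list:
    """Pick up to `num_refs` past frames whose temporal layer is no deeper
    than the current frame's, i.e. references of equal or higher quality --
    a simplified stand-in for a hierarchical multi-reference scheme."""
    cur = temporal_layer(idx, gop)
    refs = [j for j in range(idx - 1, -1, -1) if temporal_layer(j, gop) <= cur]
    return refs[:num_refs]


def sample_quality_level(num_levels: int = 4) -> int:
    """Randomly sample a quality level per training step (hypothetical
    sketch of a random quality training strategy)."""
    return random.randrange(num_levels)
```

For a GOP of 8, frames 0 and 8 land on layer 0, frame 4 on layer 1, frames 2 and 6 on layer 2, and the odd frames on layer 3, so e.g. frame 4 references only the layer-0 anchor at frame 0 rather than the lower-quality frames between them.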