π€ AI Summary
To address inaccurate inter-view correlation modeling caused by large disparities in wide-baseline multi-view image compression, this paper proposes 3D-LMVICβthe first learning-based multi-view image compression framework incorporating 3D Gaussian Splatting as a geometric scene prior. Methodologically, it introduces a learnable disparity estimation network for accurate cross-view disparity prediction, jointly optimized with a lightweight depth-map entropy coder and adaptive view-sequence reordering to suppress geometric redundancy and enhance temporal correlation. Evaluated on multiple benchmarks, 3D-LMVIC significantly outperforms HEVC-MV and state-of-the-art learning-based methods, achieving an average BD-rate reduction of 18.7%. It operates at real-time speed (>30 FPS) with robust compression performance, establishing a new paradigm for efficient and reliable multi-view compression in applications such as VR and autonomous driving.
π Abstract
Multi-view image compression is vital for 3D-related applications. To effectively model correlations between views, existing methods typically predict disparity between two views on a 2D plane, which works well for small disparities, such as in stereo images, but struggles with larger disparities caused by significant view changes. To address this, we propose a novel approach: learning-based multi-view image coding with 3D Gaussian geometric priors (3D-GP-LMVIC). Our method leverages 3D Gaussian Splatting to derive geometric priors of the 3D scene, enabling more accurate disparity estimation across views within the compression model. Additionally, we introduce a depth map compression model to reduce redundancy in geometric information between views. A multi-view sequence ordering method is also proposed to enhance correlations between adjacent views. Experimental results demonstrate that 3D-GP-LMVIC surpasses both traditional and learning-based methods in performance, while maintaining fast encoding and decoding speed.