AI Summary
To address SLAM tracking drift caused by photometric inconsistency on non-Lambertian surfaces and respiratory motion in endoscopic surgery, this paper proposes a highly robust real-time 3D reconstruction method. The approach jointly optimizes SLAM pose estimation and scene reconstruction. Key contributions include: (1) the first optical-flow-constrained 3D Gaussian splatting optimization framework incorporating geometric consistency priors; (2) a depth regularization strategy ensuring the validity and stability of sparse endoscopic depth measurements; and (3) a keyframe-quality-aware adaptive Gaussian refinement mechanism to enhance modeling accuracy in dynamic scenes. Evaluated on C3VD and StereoMIS datasets, the method achieves a 2.1 dB PSNR improvement in novel-view synthesis and reduces absolute trajectory error (ATE) by 37% over state-of-the-art methods. It supports real-time operation in both static and dynamic surgical scenarios.
Abstract
Efficient three-dimensional reconstruction and real-time visualization are critical in surgical scenarios such as endoscopy. In recent years, 3D Gaussian Splatting (3DGS) has demonstrated remarkable performance in efficient 3D reconstruction and rendering. Most 3DGS-based Simultaneous Localization and Mapping (SLAM) methods rely only on appearance constraints to optimize both the 3DGS representation and the camera poses. In endoscopic scenarios, however, photometric inconsistencies caused by non-Lambertian surfaces and dynamic motion from breathing degrade the performance of SLAM systems. To address these issues, we additionally introduce an optical flow loss as a geometric constraint, which effectively constrains both the 3D structure of the scene and the camera motion. Furthermore, we propose a depth regularisation strategy to mitigate photometric inconsistencies and ensure the validity of 3DGS depth rendering in endoscopic scenes. In addition, to improve scene representation in the SLAM system, we refine the 3DGS optimization strategy by focusing on viewpoints corresponding to keyframes with suboptimal rendering quality, achieving better rendering results. Extensive experiments on the C3VD static dataset and the StereoMIS dynamic dataset demonstrate that our method outperforms existing state-of-the-art methods in novel view synthesis and pose estimation, exhibiting high performance in both static and dynamic surgical scenes. The source code will be publicly available upon paper acceptance.
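The joint objective the abstract describes, an appearance (photometric) term augmented with an optical-flow geometric constraint and a depth regularisation term, can be sketched as a weighted sum. This is a minimal illustrative sketch, not the paper's implementation: the loss forms, the sparse-depth masking scheme, and the weights `lambda_flow` and `lambda_depth` are all assumptions.

```python
import numpy as np

def photometric_loss(rendered, target):
    # L1 appearance error between the 3DGS rendering and the observed frame
    return np.abs(rendered - target).mean()

def flow_loss(pred_flow, ref_flow):
    # Mean endpoint error between the flow induced by the rendered geometry
    # and camera motion, and a reference optical flow (hypothetically from
    # a pretrained flow estimator); arrays have shape (H, W, 2)
    return np.linalg.norm(pred_flow - ref_flow, axis=-1).mean()

def depth_regularisation(rendered_depth, sparse_depth, valid_mask):
    # Penalise rendered depth only where sparse endoscopic depth
    # measurements are valid (valid_mask is 0/1 per pixel)
    diff = np.abs(rendered_depth - sparse_depth)
    return (diff * valid_mask).sum() / max(valid_mask.sum(), 1)

def total_loss(rendered, target, pred_flow, ref_flow,
               rendered_depth, sparse_depth, valid_mask,
               lambda_flow=0.1, lambda_depth=0.5):
    # Weighted combination; the weights here are placeholder values
    return (photometric_loss(rendered, target)
            + lambda_flow * flow_loss(pred_flow, ref_flow)
            + lambda_depth * depth_regularisation(
                rendered_depth, sparse_depth, valid_mask))
```

In an actual 3DGS-SLAM pipeline these terms would be differentiable (e.g. PyTorch tensors) and backpropagated jointly to the Gaussian parameters and the camera pose of the current frame.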