🤖 AI Summary
In endoscopic 3D reconstruction, intraoperative continuous zooming and lens rotation cause intrinsic camera parameters to vary dynamically, making accurate calibration infeasible; most existing methods neglect intrinsic parameter estimation, severely limiting reconstruction accuracy. This paper proposes the first self-supervised monocular depth estimation framework that jointly optimizes depth maps, camera poses, and time-varying intrinsics. Built upon Depth Anything V2, our method incorporates an attention mechanism to enhance pose estimation robustness and employs weight-decomposed low-rank adaptation (DoRA) for efficient fine-tuning of dynamic intrinsics. Evaluated on SCARED and C3VD benchmarks, our approach significantly outperforms state-of-the-art methods: depth error is reduced by 12.7%, and 3D reconstruction quality is markedly improved. The code and pretrained models are publicly released.
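The summary mentions weight-decomposed low-rank adaptation (DoRA) for efficient fine-tuning. As background, a minimal numpy sketch of the DoRA weight merge (from the DoRA paper, not the authors' implementation): the frozen weight plus a low-rank update is split into a column-wise direction, rescaled by a learned magnitude vector.

```python
import numpy as np

def dora_merge(W0, A, B, m):
    """Merge a DoRA-adapted weight.

    W0: (out, in) frozen pretrained weight
    A:  (r, in)   low-rank factor (trainable)
    B:  (out, r)  low-rank factor (trainable)
    m:  (1, in)   learned per-column magnitude vector
    """
    V = W0 + B @ A                              # direction: frozen weight + low-rank delta
    norm = np.linalg.norm(V, axis=0, keepdims=True)  # column-wise norm
    return m * (V / norm)                       # rescale each column by its magnitude
```

At initialization, DoRA sets the low-rank delta to zero and `m` to the column norms of `W0`, so the merged weight reproduces the pretrained weight exactly.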
📝 Abstract
3D reconstruction of endoscopic surgery scenes plays a vital role in enhancing scene perception, enabling AR visualization, and supporting context-aware decision-making in image-guided surgery. A critical yet challenging step in this process is the accurate estimation of the endoscope's intrinsic parameters. In real surgical settings, intrinsic calibration is hindered by sterility constraints and the use of specialized endoscopes with continuous zoom and telescope rotation. Most existing methods for endoscopic 3D reconstruction do not estimate intrinsic parameters, limiting their effectiveness for accurate and reliable reconstruction. In this paper, we integrate intrinsic parameter estimation into a self-supervised monocular depth estimation framework by adapting the Depth Anything V2 (DA2) model for joint depth, pose, and intrinsics prediction. We introduce an attention-based pose network and a Weight-Decomposed Low-Rank Adaptation (DoRA) strategy for efficient fine-tuning of DA2. Our method is validated on the SCARED and C3VD public datasets, demonstrating superior performance compared to recent state-of-the-art approaches in self-supervised monocular depth estimation and 3D reconstruction. Code and model weights can be found in the project repository: https://github.com/MOYF-beta/EndoSfM3D.
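For context on why predicted intrinsics matter here: self-supervised depth methods supervise the network by warping one view into another, and that warp depends on the intrinsic matrix. A minimal numpy sketch of the standard reprojection step (the general technique, not the paper's code; all names are illustrative) with predicted depth, pose, and intrinsics:

```python
import numpy as np

def reproject(depth, K, T):
    """Warp a source pixel grid into a target view.

    depth: (H, W) predicted depth map
    K:     (3, 3) predicted camera intrinsics (time-varying in this setting)
    T:     (4, 4) predicted relative camera pose (target <- source)
    Returns (H, W, 2) corresponding pixel coordinates in the target image.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x HW homogeneous pixels
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)   # back-project to 3D, scale by depth
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])  # homogeneous 3D points
    proj = K @ (T @ cam_h)[:3]                            # transform to target frame, project
    return (proj[:2] / proj[2:3]).T.reshape(H, W, 2)
```

An inaccurate `K` distorts every warped pixel, which is why the photometric loss degrades when intrinsics are fixed to wrong values; jointly predicting `K` lets the same loss also supervise the intrinsics.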