EndoSfM3D: Learning to 3D Reconstruct Any Endoscopic Surgery Scene using Self-supervised Foundation Model

📅 2025-10-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
In endoscopic 3D reconstruction, intraoperative continuous zooming and lens rotation cause intrinsic camera parameters to vary dynamically, making accurate calibration infeasible; most existing methods neglect intrinsic parameter estimation, severely limiting reconstruction accuracy. This paper proposes the first self-supervised monocular depth estimation framework that jointly optimizes depth maps, camera poses, and time-varying intrinsics. Built upon Depth Anything V2, our method incorporates an attention mechanism to enhance pose estimation robustness and employs weight-decomposed low-rank adaptation (DoRA) for efficient fine-tuning of dynamic intrinsics. Evaluated on SCARED and C3VD benchmarks, our approach significantly outperforms state-of-the-art methods: depth error is reduced by 12.7%, and 3D reconstruction quality is markedly improved. The code and pretrained models are publicly released.
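The summary's mention of weight-decomposed low-rank adaptation (DoRA) can be illustrated with a minimal sketch. DoRA splits a pretrained weight into a magnitude and a direction, fine-tunes the direction with a LoRA-style low-rank update, and learns the magnitude separately. This is an illustrative numpy sketch of the merged-weight formula, not the authors' implementation; all names are assumptions.

```python
import numpy as np

def dora_merge(W0, A, B, m):
    """DoRA merged weight (illustrative sketch).

    W0 : frozen pretrained weight, shape (d_out, d_in)
    B @ A : low-rank directional update, (d_out, r) @ (r, d_in)
    m : learnable per-column magnitude vector, shape (d_in,)

    The adapted matrix is column-normalized to a pure direction,
    then rescaled by the learned magnitudes m.
    """
    V = W0 + B @ A                                        # LoRA-style update
    col_norm = np.linalg.norm(V, axis=0, keepdims=True)   # per-column L2 norm
    return m * (V / col_norm)                             # magnitude x unit direction
```

Initializing `m` to the column norms of `W0` and `B` to zeros reproduces the pretrained weight exactly at the start of fine-tuning, which is the usual warm-start property of such adapters.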

📝 Abstract
3D reconstruction of endoscopic surgery scenes plays a vital role in enhancing scene perception, enabling AR visualization, and supporting context-aware decision-making in image-guided surgery. A critical yet challenging step in this process is the accurate estimation of the endoscope's intrinsic parameters. In real surgical settings, intrinsic calibration is hindered by sterility constraints and the use of specialized endoscopes with continuous zoom and telescope rotation. Most existing methods for endoscopic 3D reconstruction do not estimate intrinsic parameters, limiting their effectiveness for accurate and reliable reconstruction. In this paper, we integrate intrinsic parameter estimation into a self-supervised monocular depth estimation framework by adapting the Depth Anything V2 (DA2) model for joint depth, pose, and intrinsics prediction. We introduce an attention-based pose network and a Weight-Decomposed Low-Rank Adaptation (DoRA) strategy for efficient fine-tuning of DA2. Our method is validated on the SCARED and C3VD public datasets, demonstrating superior performance compared to recent state-of-the-art approaches in self-supervised monocular depth estimation and 3D reconstruction. Code and model weights are available in the project repository: https://github.com/MOYF-beta/EndoSfM3D.
Problem

Research questions and friction points this paper is trying to address.

Estimating endoscope intrinsic parameters under surgical sterility constraints
Reconstructing 3D endoscopic scenes under continuous zoom and telescope rotation
Integrating intrinsic calibration into a self-supervised monocular depth estimation framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised monocular depth estimation with intrinsic parameter prediction
Attention-based pose network for improved pose estimation
Weight-Decomposed Low-Rank Adaptation for efficient model fine-tuning
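The self-supervision behind joint depth, pose, and intrinsics prediction rests on view-synthesis reprojection: a pixel is backprojected with the predicted depth and intrinsics, transformed by the predicted relative pose, and projected into the other frame, where photometric consistency supplies the training signal. Below is a minimal geometric sketch of that reprojection step under assumed pinhole intrinsics; the function and variable names are illustrative, not the authors' API.

```python
import numpy as np

def reproject(u, v, depth, K_src, K_tgt, R, t):
    """Warp a source pixel (u, v) into a target view (illustrative sketch).

    depth  : predicted depth at (u, v)
    K_src  : 3x3 source intrinsics (possibly time-varying, hence predicted)
    K_tgt  : 3x3 target intrinsics
    R, t   : relative rotation (3x3) and translation (3,) source -> target
    Returns the pixel coordinates of the warped point in the target image.
    """
    p = depth * np.linalg.inv(K_src) @ np.array([u, v, 1.0])  # backproject to 3D
    q = K_tgt @ (R @ p + t)                                   # transform and project
    return q[:2] / q[2]                                       # perspective divide
```

With an identity pose and equal intrinsics the pixel maps to itself, which is a convenient sanity check when wiring up such a warping layer.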
Changhao Zhang
UCL Hawkes Institute and Department of Medical Physics and Biomedical Engineering, University College London, UK
Matthew J. Clarkson
Professor of Biomedical Engineering at University College London
Image Guided Surgery · Medical Image Computing · Image Registration · Computer Vision
Mobarak I. Hoque
UCL Hawkes Institute and Department of Medical Physics and Biomedical Engineering, University College London, UK