🤖 AI Summary
Preoperative in-bed 3D human pose and shape estimation (PSE) is critical for precise positioning planning, augmented reality (AR) surgical navigation, and postoperative rehabilitation. However, existing RGB-D, infrared, and pressure-map-based approaches suffer from severe occlusion by bedding and lack robustness under complex body poses. This paper proposes the first multimodal PSE framework that fuses routinely acquired clinical CT scans with depth maps. The method combines CT-derived geometric feature extraction, a probabilistic correspondence-based alignment module for shape estimation, and a lightweight neural network for pose regression, overcoming the modeling bottleneck induced by bedding occlusion. Evaluated on a clinical whole-body phantom and healthy volunteers, the approach reduces pose and shape estimation errors by 23% and 49.16%, respectively, improving AR image registration accuracy and the reliability of preoperative positioning.
📝 Abstract
In perioperative care, precise in-bed 3D patient pose and shape estimation (PSE) can be vital in optimizing patient positioning in preoperative planning, enabling accurate overlay of medical images for augmented reality-based surgical navigation, and mitigating the risks of prolonged immobility during recovery. Conventional PSE methods relying on modalities such as RGB-D, infrared, or pressure maps often struggle with occlusions caused by bedding and with complex patient positioning, leading to inaccurate estimates that can affect clinical outcomes. To address these challenges, we present the first multi-modal in-bed patient 3D PSE network that fuses detailed geometric features extracted from routinely acquired computed tomography (CT) scans with depth maps (mPSE-CT). mPSE-CT comprises a shape estimation module based on probabilistic correspondence alignment, a pose estimation module with a refined neural network, and a final parameter mixing module. This multi-modal network robustly reconstructs occluded body regions and improves the accuracy of the estimated 3D human mesh model. We validated mPSE-CT on proprietary whole-body rigid phantom and volunteer datasets in clinical scenarios. mPSE-CT outperformed the best-performing prior method by 23% and 49.16% in pose and shape estimation, respectively, demonstrating its potential for improving clinical outcomes in challenging perioperative environments.
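The abstract does not spell out the exact formulation of the probabilistic correspondence alignment. A common instantiation of this idea is a Coherent Point Drift-style soft assignment between template points and observed surface points (e.g., CT- or depth-derived), where a uniform outlier term absorbs occluded or missing regions. The sketch below is illustrative only, not the paper's implementation; all function names are hypothetical, and `w` is an assumed outlier weight:

```python
import numpy as np

def soft_correspondences(source, target, sigma2, w=0.1):
    """CPD-style E-step: posterior probability P[m, n] that observed
    point target[n] was generated by template point source[m]."""
    M, D = source.shape
    N = target.shape[0]
    # Pairwise squared distances, shape (M, N).
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    g = np.exp(-d2 / (2.0 * sigma2))
    # Uniform outlier term: lets occluded/missing regions opt out
    # of the Gaussian mixture instead of distorting the fit.
    c = (2.0 * np.pi * sigma2) ** (D / 2.0) * w / (1.0 - w) * M / N
    return g / (g.sum(axis=0, keepdims=True) + c)

def align_rigid(source, target, P):
    """Weighted Procrustes (Kabsch) update from soft correspondences:
    returns R, t minimizing sum_mn P[m,n] * ||target_n - (R source_m + t)||^2."""
    Np = P.sum()
    mu_s = (P.sum(axis=1) @ source) / Np   # weighted template centroid
    mu_t = (P.sum(axis=0) @ target) / Np   # weighted observation centroid
    H = (source - mu_s).T @ P @ (target - mu_t)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Sign correction keeps R a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    S = np.diag([1.0] * (source.shape[1] - 1) + [d])
    R = Vt.T @ S @ U.T
    t = mu_t - R @ mu_s
    return R, t
```

Alternating these two steps (recompute `P`, then update the transform, optionally annealing `sigma2`) yields an EM-style registration loop; the soft assignments make the alignment tolerant to the partial, occluded observations described in the abstract.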