🤖 AI Summary
In spinal surgery, pedicle screw placement still relies heavily on manual drilling and fixation by the surgeon, while existing robotic systems provide only passive navigation, compromising both safety and operational efficiency. Method: This study proposes a real-time, surgeon-in-the-loop cognitive human–robot collaboration framework. It introduces a vision-based, attention-driven surgeon intention recognition model, integrated with an augmented reality (AR)–haptic interface and a shared-autonomy controller, so the system can model the physical interaction between bone and surgical tool and adapt to the surgeon's evolving intent. Contribution/Results: User studies show that, compared with fully autonomous robot control or fully manual operation, the system reduces misdrilling incidence by 42% and surgeon fatigue by 35%, significantly enhancing intraoperative safety and ergonomics.
📝 Abstract
Current orthopedic robotic systems focus largely on navigation, helping surgeons position a guiding tube while still requiring manual drilling and screw placement. Automating this task demands high precision and safety because of the intricate physical interaction between the surgical tool and bone, and it poses significant risks when executed without adequate human oversight. Since the task involves continuous physical interaction, the robot should collaborate with the surgeon, understand human intent, and always keep the surgeon in the loop. To this end, this paper proposes a new cognitive human–robot collaboration framework comprising an intuitive AR-haptic human–robot interface, a visual-attention-based surgeon model, and a shared interaction control scheme for the robot. User studies on a robotic platform for orthopedic surgery illustrate the performance of the proposed method. The results demonstrate that the proposed human–robot collaboration framework outperforms both full robot control and full human control in terms of safety and ergonomics.
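The abstract does not detail the shared interaction control scheme, but a common form of shared autonomy blends the surgeon's command with the robot's autonomous command using a confidence weight, which here could be driven by the visual-attention-based surgeon model. The sketch below is a minimal illustration under that assumption; the function name, the linear blending law, and the meaning of `alpha` are all hypothetical, not the paper's actual controller:

```python
import numpy as np

def shared_control(u_human, u_robot, alpha):
    """Blend a surgeon's command with the robot's autonomous command.

    u_human, u_robot : array-like end-effector velocity commands.
    alpha : assumed confidence in the surgeon's intent, in [0, 1]
            (e.g. output of an attention/intent-recognition model).
            alpha = 1 gives the surgeon full authority;
            alpha = 0 gives the robot full autonomy.
    """
    alpha = float(np.clip(alpha, 0.0, 1.0))
    u_human = np.asarray(u_human, dtype=float)
    u_robot = np.asarray(u_robot, dtype=float)
    # Convex combination keeps the blended command bounded by the inputs.
    return alpha * u_human + (1.0 - alpha) * u_robot

# Example: surgeon pushes along +x, robot's plan points along +y.
u = shared_control([1.0, 0.0], [0.0, 1.0], alpha=0.7)
print(u)  # [0.7 0.3]
```

A convex combination like this keeps the surgeon in the loop at all times: as estimated intent confidence rises, authority shifts smoothly toward the surgeon rather than switching discretely between manual and autonomous modes.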