Medical Vision Language Models as Policies for Robotic Surgery

📅 2025-10-07
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Vision-based proximal policy optimization (PPO) for laparoscopic surgery faces challenges including high-dimensional image inputs, sparse reward signals, and difficulty in extracting task-relevant features. Method: This work introduces MedFlamingo, a medical-domain pretrained vision-language model (VLM), into the PPO policy network for the first time. Leveraging its medical prior knowledge, MedFlamingo generates high-level planning tokens, enabling end-to-end learning from raw laparoscopic video streams and textual instructions within the LapGym simulator. Contribution/Results: Evaluated on five laparoscopic tasks, the approach achieves an average success rate exceeding 70%, outperforming baseline methods by 66.7% to 1114.3% and significantly accelerating policy convergence. This study pioneers the integration of domain-specific VLMs into robotic surgical reinforcement learning, effectively bridging low-level perception and high-level semantic decision-making.

๐Ÿ“ Abstract
Vision-based Proximal Policy Optimization (PPO) struggles with robotic laparoscopic surgical tasks driven by visual observations, due to the high-dimensional nature of visual input, the sparsity of rewards in surgical environments, and the difficulty of extracting task-relevant features from raw visual data. We introduce a simple approach that integrates MedFlamingo, a medical domain-specific Vision-Language Model, with PPO. Our method is evaluated on five diverse laparoscopic surgery task environments in LapGym, using only endoscopic visual observations. MedFlamingo PPO outperforms, and converges faster than, both standard vision-based PPO and OpenFlamingo PPO baselines, achieving task success rates exceeding 70% across all environments, with improvements ranging from 66.67% to 1114.29% over the baselines. By processing task observations and instructions once per episode to generate high-level planning tokens, our method efficiently combines medical expertise with real-time visual feedback. Our results highlight the value of specialized medical knowledge in robotic surgical planning and decision-making.
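The once-per-episode conditioning described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's architecture: the `PlanningTokenPolicy` class, all layer sizes, and the concatenation-based fusion are assumptions, and a plain tensor stands in for the planning tokens that MedFlamingo would produce from the initial observation and the task instruction.

```python
import torch
import torch.nn as nn

class PlanningTokenPolicy(nn.Module):
    """Actor-critic head conditioned on fixed per-episode planning tokens.

    The planning tokens are generated once per episode (in the paper, by
    querying MedFlamingo with the first observation and the instruction)
    and then held fixed while the policy acts on per-step observations.
    """
    def __init__(self, obs_dim=64, token_dim=32, n_tokens=4, n_actions=5):
        super().__init__()
        self.obs_encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        # Fuse the per-step visual features with the flattened planning tokens.
        self.fuse = nn.Linear(128 + n_tokens * token_dim, 128)
        self.actor = nn.Linear(128, n_actions)   # action logits
        self.critic = nn.Linear(128, 1)          # state-value estimate

    def forward(self, obs, planning_tokens):
        # obs: (batch, obs_dim); planning_tokens: (batch, n_tokens, token_dim)
        h = self.obs_encoder(obs)
        z = planning_tokens.flatten(start_dim=1)
        h = torch.relu(self.fuse(torch.cat([h, z], dim=-1)))
        return torch.distributions.Categorical(logits=self.actor(h)), self.critic(h)
```

Because the tokens are computed once per episode rather than once per step, the expensive VLM forward pass is amortized over the whole rollout, which is what lets the method combine medical prior knowledge with real-time visual feedback.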
Problem

Research questions and friction points this paper is trying to address.

Addresses high-dimensional visual input challenges in robotic surgery
Solves sparse reward issues in surgical task environments
Overcomes difficulty extracting task-relevant features from visual data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates medical vision-language model with reinforcement learning
Generates planning tokens from observations and instructions
Combines medical expertise with real-time visual feedback
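The policy conditioned on these planning tokens is still trained with ordinary PPO. As a reference point, a minimal sketch of the standard clipped surrogate objective (the paper's specific hyperparameters are not given here; `clip_eps=0.2` is the conventional default):

```python
import torch

def ppo_clip_loss(log_probs, old_log_probs, advantages, clip_eps=0.2):
    """Standard PPO clipped surrogate objective, returned as a loss to minimize."""
    ratio = torch.exp(log_probs - old_log_probs)          # importance ratio
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    # Pessimistic (min) bound over the unclipped and clipped objectives.
    return -torch.min(ratio * advantages, clipped * advantages).mean()
```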
Akshay Muppidi
Department of Computer Science, Stony Brook University, Stony Brook, USA
Martin Radfar
Unknown affiliation