MM-OR: A Large Multimodal Operating Room Dataset for Semantic Understanding of High-Intensity Surgical Environments

📅 2025-03-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current operating room (OR) datasets suffer from limited scale, low fidelity, and single-modality capture, hindering progress in intelligent OR modeling. To address this, we introduce MM-OR, the first large-scale, high-fidelity, multimodal spatiotemporal OR dataset, comprising over 100,000 frames and integrating RGB-D video, detail views, audio, speech transcripts, robotic logs, and tracking data. It further provides panoptic segmentation masks, semantic scene graphs, and annotations for diverse downstream tasks. We also propose the first multimodal scene graph generation paradigm for the OR and design MM2SG, a dedicated multimodal large vision-language model, which achieves significant improvements over unimodal baselines in cross-modal reasoning and scene graph generation. Together, MM-OR and MM2SG establish a new benchmark for holistic OR understanding, with all code and data publicly released.

📝 Abstract
Operating rooms (ORs) are complex, high-stakes environments requiring precise understanding of interactions among medical staff, tools, and equipment for enhancing surgical assistance, situational awareness, and patient safety. Current datasets fall short in scale and realism, and do not capture the multimodal nature of OR scenes, limiting progress in OR modeling. To this end, we introduce MM-OR, a realistic and large-scale multimodal spatiotemporal OR dataset, and the first dataset to enable multimodal scene graph generation. MM-OR captures comprehensive OR scenes containing RGB-D data, detail views, audio, speech transcripts, robotic logs, and tracking data, and is annotated with panoptic segmentations, semantic scene graphs, and downstream task labels. Further, we propose MM2SG, the first multimodal large vision-language model for scene graph generation, and through extensive experiments demonstrate its ability to effectively leverage multimodal inputs. Together, MM-OR and MM2SG establish a new benchmark for holistic OR understanding, and open the path towards multimodal scene analysis in complex, high-stakes environments. Our code and data are available at https://github.com/egeozsoy/MM-OR.
Problem

Research questions and friction points this paper is trying to address.

Lack of large-scale, realistic multimodal datasets for operating room understanding.
Need for precise interaction modeling among staff, tools, and equipment in ORs.
Limited progress in multimodal scene graph generation for high-stakes environments.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale multimodal OR dataset MM-OR
Multimodal scene graph generation model MM2SG
Comprehensive OR scene capture with diverse data
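A semantic scene graph like the ones MM-OR is annotated with can be thought of as a set of (subject, predicate, object) triples over OR entities. The sketch below illustrates that data structure in minimal Python; the entity and predicate names are illustrative assumptions, not MM-OR's actual label set or schema.

```python
from dataclasses import dataclass

# Hypothetical minimal scene graph representation: entities (staff,
# tools, equipment) linked by predicate triples. Label names here are
# illustrative only, not taken from the MM-OR annotation schema.

@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str

def describe(triples):
    """Render each triple as a simple natural-language statement."""
    return [f"{t.subject} {t.predicate} {t.obj}" for t in triples]

# Example OR scene with three interactions.
scene = [
    Triple("head_surgeon", "holding", "drill"),
    Triple("head_surgeon", "operating_on", "patient"),
    Triple("assistant", "close_to", "operating_table"),
]

for line in describe(scene):
    print(line)
```

A model like MM2SG would predict such triples from the multimodal inputs; this sketch only shows the target output structure.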