SigmaCollab: An Application-Driven Dataset for Physically Situated Collaboration

📅 2025-11-04
🤖 AI Summary
This study addresses embodied human-AI collaboration in physical spaces within mixed reality (MR). To overcome limitations of existing datasets, namely the absence of real-world procedural tasks, multimodal sensory data, and fine-grained interaction annotations, the authors introduce SigmaCollab, an application-driven dataset for physical-world human-AI collaboration in MR. It comprises egocentric video, depth maps, eye tracking, head and hand poses, and participant and system audio from 85 sessions in which untrained participants performed procedural tasks under guidance from an MR assistive AI agent, supplemented with post-hoc annotations. The dataset totals approximately 14 hours of temporally aligned, multimodal collaboration data. Its application-driven, interactive nature surfaces the complexity of multimodal interaction in authentic MR-assisted task execution and provides realistic testing grounds for models of embodied collaboration. The dataset is publicly released and is intended as foundational infrastructure for future evaluation benchmarks for human-AI collaboration in physical environments.

📝 Abstract
We introduce SigmaCollab, a dataset enabling research on physically situated human-AI collaboration. The dataset consists of a set of 85 sessions in which untrained participants were guided by a mixed-reality assistive AI agent in performing procedural tasks in the physical world. SigmaCollab includes a set of rich, multimodal data streams, such as the participant and system audio, egocentric camera views from the head-mounted device, depth maps, head, hand and gaze tracking information, as well as additional annotations performed post-hoc. While the dataset is relatively small in size (~ 14 hours), its application-driven and interactive nature brings to the fore novel research challenges for human-AI collaboration, and provides more realistic testing grounds for various AI models operating in this space. In future work, we plan to use the dataset to construct a set of benchmarks for physically situated collaboration in mixed-reality task assistive scenarios. SigmaCollab is available at https://github.com/microsoft/SigmaCollab.
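The on-disk format of these streams is not specified on this page, but a common first step with multimodal data of this kind is aligning streams captured at different sampling rates. The Python sketch below pairs each egocentric video frame with the temporally nearest gaze sample; the Sample container, frame rate, and gaze rate are hypothetical stand-ins for illustration, not the dataset's actual schema.

```python
import bisect
from dataclasses import dataclass
from typing import Any

@dataclass
class Sample:
    timestamp: float  # seconds since the start of the session
    payload: Any      # frame id, gaze point, pose, etc.

def align_nearest(reference: list[Sample], other: list[Sample]) -> list[tuple[Sample, Sample]]:
    """Pair each sample in `reference` with the temporally nearest sample in `other`."""
    if not other:
        return []
    times = [s.timestamp for s in other]  # assumed sorted by timestamp
    pairs = []
    for ref in reference:
        i = bisect.bisect_left(times, ref.timestamp)
        # consider the neighbors on either side of the insertion point
        candidates = [j for j in (i - 1, i) if 0 <= j < len(other)]
        j = min(candidates, key=lambda k: abs(times[k] - ref.timestamp))
        pairs.append((ref, other[j]))
    return pairs

# Hypothetical usage: ~30 fps egocentric frames paired with ~60 Hz gaze samples.
frames = [Sample(t / 30.0, f"frame_{t}") for t in range(90)]
gaze = [Sample(t / 60.0, (0.5, 0.5)) for t in range(180)]
print(align_nearest(frames, gaze)[0])
```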
Problem

Research questions and friction points this paper is trying to address.

Developing a multimodal dataset for physically situated human-AI collaboration research
Addressing novel challenges in mixed-reality task-assistance scenarios
Creating benchmarks for AI models operating in interactive physical environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

An application-driven dataset for physically situated human-AI collaboration
Rich multimodal data streams captured during mixed-reality AI assistance
Planned benchmarks for mixed-reality task-assistance scenarios (a sketch of one possible metric follows below)
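No official benchmark metrics exist yet, since the benchmarks are planned future work. As a rough illustration of the kind of score such a benchmark might compute, the sketch below measures temporal intersection-over-union between a model's predicted task-step interval and a post-hoc annotation; the interval values are invented for the example.

```python
def temporal_iou(pred: tuple[float, float], gold: tuple[float, float]) -> float:
    """Intersection-over-union of two (start, end) time intervals, in seconds."""
    inter = max(0.0, min(pred[1], gold[1]) - max(pred[0], gold[0]))
    union = (pred[1] - pred[0]) + (gold[1] - gold[0]) - inter
    return inter / union if union > 0 else 0.0

# Hypothetical usage: score a predicted step interval against an annotated one.
print(temporal_iou((12.0, 45.5), (10.0, 44.0)))  # ≈ 0.90
```

Temporal IoU is a standard choice for segment-level evaluation in egocentric video research, which is why it is used here purely for illustration.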
👥 Authors
Dan Bohus
Microsoft Research, Redmond, WA, USA
Sean Andrist
Microsoft Research
Situated Interaction, Human-Robot Interaction, Intelligent Virtual Agents, Human-Computer Interaction, AI
Ann Paradiso
Microsoft Research, Redmond, WA, USA
Nick Saw
Microsoft Research, Redmond, WA, USA
Tim Schoonbeek
Eindhoven University of Technology, Eindhoven, Netherlands
Maia Stiber
Microsoft Research
Robotics, Human-Robot Interaction