🤖 AI Summary
Existing systems struggle to support strict audio-visual synchronization, limiting the analysis of fine-grained temporal features in dialogue such as turn-taking, overlapping speech, and prosody. To address this challenge, this work proposes an end-to-end multimodal acquisition and calibration framework that treats synchronized audio and video as equally central modalities for the first time. By integrating a multi-camera array with multi-channel microphones under a unified temporal architecture, the system enables scalable, reproducible, high-quality recording. Standardized calibration and quality control procedures ensure high temporal consistency across modalities, yielding data that effectively supports fine-grained analysis of conversational behavior and data-driven modeling.
📝 Abstract
Multi-view capture systems have long been an important research tool for recording human motion under controlled conditions. Most existing systems are designed around video streams and provide little or no support for audio acquisition and rigorous audio-video alignment, despite both being essential for studying conversational interaction, where timing at the level of turn-taking, overlap, and prosody matters. In this technical report, we describe an audio-visual multi-view capture system that addresses this gap by treating synchronized audio and video as first-class signals. The system combines a multi-camera pipeline with multi-channel microphone recording under a unified timing architecture and provides a practical workflow for calibration, acquisition, and quality control that supports repeatable recordings at scale. We quantify synchronization performance in deployment and show that the resulting recordings are temporally consistent enough to support fine-grained analysis and data-driven modeling of conversational behavior.
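To make the notion of cross-modal temporal consistency concrete, the sketch below shows one simple way such a check could look: video frame timestamps and audio event timestamps, both expressed on a shared clock, are compared against a tolerance of half a frame period. The function names, rates, and tolerance are illustrative assumptions, not the report's actual pipeline.

```python
# Hypothetical sketch of an audio-video sync check under a shared-clock
# assumption; names, rates, and the tolerance are illustrative only.

def frame_times(n_frames: int, fps: float, start: float = 0.0) -> list[float]:
    """Timestamps of video frames on the shared clock."""
    return [start + i / fps for i in range(n_frames)]

def sample_time(sample_index: int, sample_rate: int, start: float = 0.0) -> float:
    """Timestamp of an audio sample on the shared clock."""
    return start + sample_index / sample_rate

def max_sync_error(video_ts: list[float], audio_ts: list[float]) -> float:
    """Worst-case offset between corresponding video and audio events."""
    return max(abs(v - a) for v, a in zip(video_ts, audio_ts))

# Example: 30 fps video vs. audio event markers at 48 kHz, where each
# audio marker lands one sample late relative to its video frame.
fps, sr = 30.0, 48_000
video_ts = frame_times(5, fps)
audio_ts = [sample_time(round(t * sr) + 1, sr) for t in video_ts]
err = max_sync_error(video_ts, audio_ts)
assert err < 1.0 / (2 * fps)  # within half a frame period
```

In a real deployment the audio markers would come from a physical sync source (e.g. a clap or timecode injected into both streams) rather than being generated, but the acceptance criterion, worst-case offset against a frame-period tolerance, has the same shape.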