🤖 AI Summary
Educational settings lack scalable, high-resolution tools for synchronized multimodal data acquisition, which hinders the practical deployment of learning analytics. To address this, we introduce Watch-DMLT and ViSeDOPS, two integrated systems that enable real-time, multi-user physiological (heart rate) and motion sensing via Fitbit Sense 2 smartwatches, synchronized with eye-tracking, video, and contextual annotations at millisecond-level temporal precision and supported by interactive visualization. The system was deployed end-to-end in classroom oral presentation tasks involving 65 students, demonstrating its feasibility and efficacy for fine-grained, scalable learning analytics in authentic educational environments. Key contributions include: (1) the first classroom-scale, non-intrusive, multi-user multimodal synchronization framework achieving high temporal fidelity; and (2) an open-source toolchain and web-based visualization dashboard that substantially lower the technical barriers to multimodal educational research.
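Conceptually, this kind of synchronization comes down to correcting each device's clock offset and then joining samples onto a master timeline within a millisecond-scale tolerance. The Python sketch below illustrates one common way to do this with pandas; the device names, offsets, and sampling rates are illustrative assumptions, not details from the paper.

```python
import pandas as pd

# Hypothetical sketch (not the paper's code): align two independently
# clocked streams onto a shared master timeline.

# Assumed per-device clock offsets in milliseconds, e.g. estimated
# from a shared sync event at the start of the session.
CLOCK_OFFSET_MS = {"watch_03": -42, "eyetracker_1": 17}

def to_master_clock(df: pd.DataFrame, device: str) -> pd.DataFrame:
    """Shift a stream's timestamps onto the shared master clock."""
    out = df.copy()
    out["t"] += pd.to_timedelta(CLOCK_OFFSET_MS[device], unit="ms")
    return out.sort_values("t")

# Synthetic example streams: heart rate at ~1 Hz, gaze at 50 Hz.
hr = to_master_clock(pd.DataFrame({
    "t": pd.date_range("2024-05-01 10:00:00", periods=5, freq="1s"),
    "bpm": [72, 74, 78, 81, 79],
}), "watch_03")

gaze = to_master_clock(pd.DataFrame({
    "t": pd.date_range("2024-05-01 10:00:00", periods=250, freq="20ms"),
    "gaze_x": [0.5] * 250,
}), "eyetracker_1")

# Nearest-neighbour join within a 500 ms window: each gaze sample is
# annotated with the closest heart-rate reading on the master clock.
merged = pd.merge_asof(gaze, hr, on="t", direction="nearest",
                       tolerance=pd.Timedelta("500ms"))
print(merged.head())
```

In a real multi-device deployment the offsets would be re-estimated per session (drift between consumer devices can reach tens of milliseconds per hour), but the alignment step itself stays the same.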
📝 Abstract
Wearable sensors such as smartwatches have become increasingly prevalent in domains including healthcare, sports, and education, enabling continuous monitoring of physiological and behavioral data. In education, these technologies offer new opportunities to study cognitive and affective processes such as engagement, attention, and performance. However, the lack of scalable, synchronized, high-resolution tools for multimodal data acquisition remains a significant barrier to the widespread adoption of Multimodal Learning Analytics (MMLA) in real-world educational settings. This paper presents two complementary tools developed to address these challenges: Watch-DMLT, a data acquisition application for Fitbit Sense 2 smartwatches that enables real-time, multi-user monitoring of physiological and motion signals; and ViSeDOPS, a dashboard-based visualization system for analyzing synchronized multimodal data collected during oral presentations. We report on a classroom deployment involving 65 students and up to 16 smartwatches, in which data streams including heart rate, motion, gaze, video, and contextual annotations were captured and analyzed. The results demonstrate the feasibility and utility of the proposed system for supporting fine-grained, scalable, and interpretable MMLA in real learning environments.
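To give a concrete sense of how synchronized streams plus contextual annotations become interpretable analytics, the hypothetical sketch below slices a heart-rate stream by annotated presentation phases and computes the kind of per-phase summaries a dashboard such as ViSeDOPS might display. All names, timestamps, and values are invented for illustration; this is not the tool's actual implementation.

```python
import pandas as pd

# Hypothetical sketch (not ViSeDOPS itself): segment a synchronized
# heart-rate stream by contextual annotations and summarize per phase.

# Annotated phases of one student's oral presentation (illustrative).
annotations = pd.DataFrame({
    "phase": ["introduction", "demo", "questions"],
    "start": pd.to_datetime(["2024-05-01 10:00:00",
                             "2024-05-01 10:02:00",
                             "2024-05-01 10:05:00"]),
    "end":   pd.to_datetime(["2024-05-01 10:02:00",
                             "2024-05-01 10:05:00",
                             "2024-05-01 10:07:00"]),
})

# Placeholder heart-rate stream at 1 Hz; real data varies per student.
hr = pd.DataFrame({
    "t": pd.date_range("2024-05-01 10:00:00", periods=420, freq="1s"),
    "bpm": [70 + (i % 20) for i in range(420)],
})

# Slice the stream by each annotation interval and summarize.
rows = []
for _, a in annotations.iterrows():
    seg = hr[(hr["t"] >= a["start"]) & (hr["t"] < a["end"])]
    rows.append({"phase": a["phase"],
                 "mean_bpm": seg["bpm"].mean(),
                 "max_bpm": seg["bpm"].max()})

print(pd.DataFrame(rows))
```

The same interval-slicing pattern extends naturally to the other modalities named in the abstract (motion, gaze, video timecodes), since all of them share the synchronized timeline.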