AI Summary
This study addresses the challenge of non-intrusively identifying individuals' real-time cognitive-affective states in collaborative learning settings. We propose a video-assisted self-report framework grounded in retrospective cued recall (RCR), integrating a predefined cognitive-affective state taxonomy with prompt-guided annotation to minimize disruption to natural collaboration. By comparing label frequency and temporal distributions across reporting modalities, we uncover dynamic patterns in how emotion and attention evolve during group interaction. Our approach enhances the ecological validity and scalability of state annotation, offering a high-fidelity, low-interference paradigm for educational data mining and enabling fine-grained modeling and timely intervention in adaptive learning systems.
Abstract
Identifying the affective and attentional states of individuals within groups is difficult without disrupting the natural flow of collaboration. Recent work from our group used a retrospective cued recall paradigm in which participants spoke about their cognitive-affective states while viewing videos of their group sessions. We then collected data from additional participants whose reports were constrained to a subset of pre-identified cognitive-affective states; in this latter collection, participants either self-reported or reported in response to probes. Here, we present an initial analysis of the frequency and temporal distribution of participant reports, and of how the distributions of labels changed across the two collections. Our approach has implications for the educational data mining community, both for tracking cognitive-affective states in collaborative learning more effectively and for developing improved adaptive learning systems that can detect and respond to those states.
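As a concrete illustration of the kind of analysis described above, the sketch below shows one way label frequencies and temporal distributions of reports could be compared across reporting conditions. It is a minimal sketch under stated assumptions, not the paper's actual analysis pipeline: the file name (`reports.csv`), the column names (`condition`, `label`, `timestamp_s`), and the condition values are hypothetical.

```python
import pandas as pd

# Hypothetical annotation table: one row per participant report.
# Assumed columns (not from the paper): 'condition' identifying the
# reporting modality (e.g., open recall vs. constrained self-report vs.
# constrained probe), 'label' giving the cognitive-affective state, and
# 'timestamp_s' giving seconds into the session video.
reports = pd.read_csv("reports.csv")

# Label frequency per reporting condition, normalized so that conditions
# with different numbers of reports can be compared directly.
label_freq = (
    reports.groupby("condition")["label"]
    .value_counts(normalize=True)
    .rename("proportion")
    .reset_index()
)

# Temporal distribution: bin report timestamps into 60-second windows and
# count reports per window within each condition.
reports["window"] = (reports["timestamp_s"] // 60).astype(int)
temporal = (
    reports.groupby(["condition", "window"])
    .size()
    .rename("n_reports")
    .reset_index()
)

print(label_freq.head())
print(temporal.head())
```

A comparison like this would make it possible to see, for example, whether a given state label is reported more often under probing than under spontaneous self-report, and whether reports cluster at particular points in the collaborative session.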