🤖 AI Summary
This study addresses the challenge of predicting high-engagement segments in unedited classroom recordings, with the goal of improving teaching-resource accessibility and learning outcomes. We propose a language-agnostic, annotation-free approach that relies purely on non-semantic multimodal features (teacher pose estimated via OpenPose, audio Mel spectrograms, and slide-turning signals), fused through a lightweight multi-branch neural network that incorporates optical flow and sliding-window temporal modeling. The method operates end-to-end under realistic classroom constraints: no eye-tracking equipment, limited labeled data, and no post-processing. Our key contribution is the first purely non-semantic multimodal fusion framework, explicitly designed for computational efficiency and practical deployment. Evaluated via 7-fold cross-validation, the model achieves a Pearson correlation coefficient of 0.514 for engagement-intensity prediction and 69.3% accuracy for three-class engagement classification, both significantly outperforming established baselines.
📝 Abstract
This study proposes a multimodal neural-network approach to predicting segment access frequency in lecture archives. These archives, widely used as supplementary resources in modern education, often consist of long, unedited recordings that make it difficult to keep students engaged. Captured directly from face-to-face lectures without post-processing, they lack visual appeal, and the growing volume of recorded material makes manual editing and annotation impractical. Automatically detecting high-engagement segments is therefore crucial for improving accessibility and maintaining learning effectiveness. Our research targets real classroom lecture archives, characterized by unedited footage, no additional hardware (e.g., eye-tracking), and small class sizes. We approximate student engagement using segment access frequency as a proxy. Our model integrates multimodal features from teachers' actions (via OpenPose and optical flow), audio spectrograms, and slide page progression; these features are deliberately non-semantic, so the approach applies regardless of lecture language. Experiments show that our best model achieves a Pearson correlation of 0.5143 under 7-fold cross-validation and 69.32% average accuracy on a downstream three-class classification task. These results, obtained with high computational efficiency on a small dataset, demonstrate the practical feasibility of our system in real-world educational contexts.
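To make the fusion idea concrete, here is a minimal NumPy sketch of a multi-branch late-fusion regressor over the three non-semantic modalities. All layer sizes, feature dimensions, and the function name `predict_engagement` are hypothetical illustrations, and the randomly initialized weights stand in for trained parameters; this is a sketch of the general technique, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch(x, w, b):
    """One lightweight branch: linear projection followed by ReLU."""
    return np.maximum(x @ w + b, 0.0)

# Hypothetical per-window feature dimensions (not taken from the paper):
# pose/optical-flow features, Mel-spectrogram features, slide-turn signal.
D_POSE, D_AUDIO, D_SLIDE, H = 50, 128, 4, 16

# Randomly initialized parameters stand in for trained weights.
Wp, bp = rng.normal(size=(D_POSE, H)), np.zeros(H)
Wa, ba = rng.normal(size=(D_AUDIO, H)), np.zeros(H)
Ws, bs = rng.normal(size=(D_SLIDE, H)), np.zeros(H)
Wo, bo = rng.normal(size=(3 * H, 1)), np.zeros(1)

def predict_engagement(pose, audio, slide):
    """Encode each modality separately, concatenate (late fusion),
    and regress a single engagement score per window."""
    h = np.concatenate([branch(pose, Wp, bp),
                        branch(audio, Wa, ba),
                        branch(slide, Ws, bs)], axis=-1)
    return (h @ Wo + bo).squeeze(-1)

# A sliding-window batch of 8 lecture segments.
scores = predict_engagement(rng.normal(size=(8, D_POSE)),
                            rng.normal(size=(8, D_AUDIO)),
                            rng.normal(size=(8, D_SLIDE)))
print(scores.shape)  # one scalar score per segment
```

Concatenating per-branch encodings keeps each modality's encoder independent, which is what makes the design cheap to train on limited labeled data.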