🤖 AI Summary
Speech emotion recognition (SER) faces two challenges: the scarcity of labeled data and the high computational cost of multimodal modeling. To address these, we propose LightweightSER, a cross-modal knowledge distillation framework that leverages unlabeled audio-visual data to enhance a lightweight student model. Specifically, a pretrained large-scale audio-visual teacher model serves as the knowledge source, enabling joint distillation of speech representations and facial expression semantics to achieve cross-modal feature alignment and effective knowledge transfer. Integrated model compression techniques further ensure efficiency without compromising accuracy. Experiments on RAVDESS and CREMA-D demonstrate that LightweightSER substantially reduces dependency on labeled data, achieving 95% of fully supervised baseline performance with only 10% of labeled samples, while accelerating inference by 3.2×. This work establishes a novel paradigm for practical SER deployment in low-resource settings.
📝 Abstract
Voice interfaces integral to human-computer interaction systems can benefit from speech emotion recognition (SER) to customize responses based on user emotions. Since humans convey emotions through multimodal audio-visual cues, developing SER systems that use both modalities is beneficial. However, collecting a vast amount of labeled data for their development is expensive. This paper proposes a knowledge distillation framework called LightweightSER (LiSER) that leverages unlabeled audio-visual data for SER, using large teacher models built on advanced speech and face representation models. LiSER transfers knowledge regarding speech emotions and facial expressions from the teacher models to lightweight student models. Experiments conducted on two benchmark datasets, RAVDESS and CREMA-D, demonstrate that LiSER can reduce the dependence on extensive labeled datasets for SER tasks.
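The abstract does not specify the exact distillation objective. As a minimal sketch of the core idea, assuming standard temperature-scaled knowledge distillation with a KL-divergence loss between teacher and student emotion distributions (function names and the temperature value are illustrative, not from the paper):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened emotion
    distributions, scaled by T^2 as in classic distillation.
    On unlabeled audio-visual clips, a loss like this is the only
    training signal the student receives from the teacher."""
    p_t = softmax(teacher_logits, T)
    log_p_t = np.log(p_t + 1e-12)
    log_p_s = np.log(softmax(student_logits, T) + 1e-12)
    return (T ** 2) * np.mean(np.sum(p_t * (log_p_t - log_p_s), axis=-1))

# Example: logits over 6 emotion classes for a batch of 2 clips
teacher = np.array([[3.0, 0.5, -1.0, 0.0, -2.0, 0.2],
                    [0.1, 2.5, -0.5, 1.0, -1.0, 0.0]])
student = np.array([[2.0, 0.2, -0.5, 0.1, -1.5, 0.3],
                    [0.0, 1.8, -0.2, 0.8, -0.9, 0.1]])
loss = distillation_loss(student, teacher)
```

The loss is zero when the student matches the teacher exactly and grows as the softened distributions diverge; in a multimodal setup such as LiSER's, separate terms of this form could align the student with the speech and face teachers respectively.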