Leveraging Unlabeled Audio-Visual Data in Speech Emotion Recognition using Knowledge Distillation

📅 2025-06-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Speech emotion recognition (SER) faces dual challenges: scarcity of labeled data and the high computational cost of multimodal modeling. To address these, we propose LightweightSER, a cross-modal knowledge distillation framework that leverages unlabeled audio-visual data to enhance a lightweight student model. A pretrained large-scale audio-visual teacher model serves as the knowledge source, enabling joint distillation of speech representations and facial expression semantics to achieve cross-modal feature alignment and effective knowledge transfer. Integrated model compression techniques further ensure efficiency without compromising accuracy. Experiments on RAVDESS and CREMA-D show that LightweightSER substantially reduces dependency on labeled data, reaching 95% of fully supervised baseline performance with only 10% of labeled samples, while accelerating inference by 3.2×. This work establishes a practical paradigm for SER deployment in low-resource settings.

📝 Abstract
Voice interfaces, integral to human-computer interaction systems, can benefit from speech emotion recognition (SER) to customize responses based on user emotions. Since humans convey emotions through multimodal audio-visual cues, developing SER systems that use both modalities is beneficial. However, collecting the large amounts of labeled data needed for their development is expensive. This paper proposes a knowledge distillation framework called LightweightSER (LiSER) that leverages unlabeled audio-visual data for SER, using large teacher models built on advanced speech and face representation models. LiSER transfers knowledge about speech emotions and facial expressions from the teacher models to lightweight student models. Experiments on two benchmark datasets, RAVDESS and CREMA-D, demonstrate that LiSER can reduce the dependence on extensive labeled datasets for SER tasks.
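The paper does not publish its exact training objective, but teacher-student distillation of this kind typically combines a soft-label term (KL divergence between temperature-softened teacher and student predictions) with a feature-alignment term between teacher and student embeddings. The sketch below is a generic, hypothetical illustration of such a loss; the function names, temperature `T`, and weighting `alpha` are assumptions, not LiSER's actual formulation.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T yields softer distributions."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits,
                      student_feat, teacher_feat,
                      T=2.0, alpha=0.5):
    """Hypothetical distillation objective: soft-label KL + feature alignment.

    alpha weights the KL term against the MSE feature-matching term.
    The T*T factor rescales gradients, following common KD practice.
    """
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # KL(teacher || student) on softened class distributions
    kl = float(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)))) * T * T
    # MSE between (already dimension-matched) teacher and student features
    mse = float(np.mean((np.asarray(student_feat) - np.asarray(teacher_feat)) ** 2))
    return alpha * kl + (1 - alpha) * mse
```

On unlabeled audio-visual clips, a loss of this shape would let the student learn from the teacher's predictions and representations alone, without any emotion labels, which is how such frameworks reduce dependence on annotated data.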
Problem

Research questions and friction points this paper is trying to address.

Utilizing unlabeled audio-visual data for speech emotion recognition
Reducing reliance on large labeled datasets for SER
Distilling knowledge from teacher to lightweight student models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge distillation for speech emotion recognition
Leveraging unlabeled audio-visual data
Lightweight student models obtained via teacher-student transfer