Self-Supervised Speech Quality Assessment (S3QA): Leveraging Speech Foundation Models for a Scalable Speech Quality Metric

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Subjective mean opinion score (MOS) ratings hinder scalable, generalizable speech quality assessment due to high annotation cost and poor cross-dataset transferability. Method: a reference-free, self-supervised paradigm: (i) synthesizing degraded versions of clean speech with diverse acoustic manipulations (e.g., filtering, noise, reverberation, compression); (ii) extracting representations of the clean and degraded utterances with WavLM; (iii) using the cosine distance between each clean-degraded pair in embedding space as a self-supervised training target, the degradation index; and (iv) training a Transformer-based regressor to predict that index from the degraded audio alone. Contribution/Results: the approach couples a speech foundation model with a computationally derived, self-supervised regression target, removing the need for MOS labels during training. Experiments demonstrate strong correlation with MOS (ρ > 0.92) on unseen benchmarks (NISQA, VOiCES), consistency with ASR performance and physical acoustic parameters (e.g., microphone distance), and robust cross-corpus generalization.
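A minimal sketch of steps (ii)-(iii), the derivation of the self-supervised target: WavLM embeddings of the clean and degraded utterances are compared with cosine distance. The checkpoint name, layer choice, and mean pooling below are illustrative assumptions, not details taken from the paper.

```python
# Sketch: self-supervised degradation target as the cosine distance between
# WavLM embeddings of a clean utterance and its degraded counterpart.
# Assumptions (not specified here): the "microsoft/wavlm-base-plus" checkpoint,
# mean pooling over frames, and the final hidden layer.
import torch
import torch.nn.functional as F
from transformers import WavLMModel

wavlm = WavLMModel.from_pretrained("microsoft/wavlm-base-plus").eval()

@torch.no_grad()
def utterance_embedding(wave_16k: torch.Tensor) -> torch.Tensor:
    """Mean-pool WavLM frame features into one utterance-level vector."""
    # wave_16k: (num_samples,) mono float32 audio at 16 kHz
    frames = wavlm(wave_16k.unsqueeze(0)).last_hidden_state  # (1, T, 768)
    return frames.mean(dim=1).squeeze(0)                     # (768,)

def degradation_index(clean_16k: torch.Tensor, degraded_16k: torch.Tensor) -> float:
    """Cosine distance in embedding space; larger values = more degradation."""
    e_clean = utterance_embedding(clean_16k)
    e_degraded = utterance_embedding(degraded_16k)
    return 1.0 - F.cosine_similarity(e_clean, e_degraded, dim=0).item()
```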

📝 Abstract
Methods for automatically assessing speech quality are critical for many human language technologies. Behavioral ratings provided by human raters (e.g., mean opinion scores; MOS) are considered the gold standard, but they are susceptible to variability between individual raters, cannot easily be generalized across corpora, and are labor-intensive to collect, thus limiting the acoustic challenges they can quantify. Here, we present a new, scalable method for automatically assessing speech quality: the self-supervised speech quality assessment (S3QA) model. First, we processed high-quality utterances from multiple speech corpora, using a wide range of acoustic manipulations intended to emulate common sources of quality degradation in the real world: frequency filtering, reverberation, background noise, and digital compression. Second, we leveraged an existing, pre-trained speech foundation model, WavLM, to computationally derive a self-supervised training target for the level of signal degradation by calculating the cosine distances between the clean and degraded versions of each utterance in the embedding space. Next, we trained a transformer-based model to predict the cosine distance, or degradation index, given only the degraded versions of these utterances. Finally, the trained model was evaluated on unseen test corpora of synthetic mixtures, NISQA, and VOiCES. We show that the S3QA model trained on this task performs well and is aligned with behavioral ratings (MOS), speech technology performance (automatic speech recognition), and other important features of the held-out data (e.g., microphone distances). This approach provides an automated, scalable method for assessing speech quality across a wide range of acoustic challenges, and could easily be adapted to other use cases where acoustic simulations are available.
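As a rough illustration of the prediction stage described above, the sketch below shows a small Transformer encoder regressing the degradation index from features of the degraded utterance alone; the feature type, dimensions, depth, and pooling are assumptions for illustration rather than the authors' exact architecture.

```python
# Illustrative sketch only: a small Transformer-based regressor that predicts the
# degradation index from the degraded audio. Feature type (e.g., log-mel frames),
# model size, and pooling are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class DegradationRegressor(nn.Module):
    def __init__(self, feat_dim: int = 80, d_model: int = 256,
                 n_heads: int = 4, n_layers: int = 4):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)               # frame features -> model dim
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)                       # scalar degradation index

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, frames, feat_dim), computed from the degraded signal only
        x = self.encoder(self.proj(feats))
        return self.head(x.mean(dim=1)).squeeze(-1)             # pool over time -> (batch,)

# Training step against the WavLM-derived cosine-distance targets:
# loss = nn.functional.mse_loss(regressor(degraded_feats), target_indices)
```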
Problem

Research questions and friction points this paper is trying to address.

Automating speech quality assessment without human raters
Overcoming variability and labor costs in MOS ratings
Scaling quality metrics across diverse acoustic challenges
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages WavLM for self-supervised training
Uses transformer to predict degradation index
Automates scalable speech quality assessment
Mattson Ogg
Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory
self-supervised learning, human-AI alignment, BCI, sound recognition
Caitlyn Bishop
Johns Hopkins University Applied Physics Laboratory
Han Yi
Johns Hopkins University Applied Physics Laboratory, Laurel, USA
Sarah Robinson
Johns Hopkins University Applied Physics Laboratory, Laurel, USA