🤖 AI Summary
Subjective bias arising from inter-rater scale variability hinders performance in speech quality assessment (SQA) and continuous speech emotion recognition (CSER).

Method: We propose a unified listener-scale modeling framework based on pairwise comparison learning, replacing both conventional mean aggregation and per-rater scale modeling. It explicitly learns a shared scale representation by leveraging ordinal relationships among utterance-level relative ratings, thereby enforcing cross-rater comparability, and is trained end-to-end via comparison learning to preserve subjectivity while mitigating scale-induced bias.

Contribution/Results: The approach significantly improves generalization across listeners and tasks, with experiments demonstrating state-of-the-art performance on both SQA and CSER benchmark datasets and validating its effectiveness, robustness, and task-agnostic applicability.
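As a minimal sketch of the comparison-learning step (our illustration under assumed details, not the authors' released code): a small scorer maps pre-extracted utterance embeddings to scalars on one shared scale, and a Bradley-Terry-style pairwise loss supervises only the ordering of two utterances rated by the same listener. The `SharedScaleScorer` head, embedding dimension, and loss choice here are all assumptions.

```python
import torch
import torch.nn as nn

class SharedScaleScorer(nn.Module):
    """Maps an utterance embedding to a scalar score on one shared scale.

    Hypothetical head; the paper's actual architecture is not specified here.
    """
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(embed_dim, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(x).squeeze(-1)  # (batch,) scalar scores

def pairwise_comparison_loss(score_a, score_b, a_rated_higher):
    """Bradley-Terry-style loss: P(a preferred over b) = sigmoid(s_a - s_b).

    Only the *ordering* of a listener's two ratings supervises the model,
    so any additive per-listener offset cancels in the score difference.
    """
    logits = score_a - score_b
    return nn.functional.binary_cross_entropy_with_logits(
        logits, a_rated_higher.float()
    )

# Toy usage: pairs (x_a, x_b) rated by the same listener, with a binary
# label indicating whether x_a received the higher rating from that listener.
model = SharedScaleScorer(embed_dim=256)
x_a, x_b = torch.randn(32, 256), torch.randn(32, 256)
a_rated_higher = torch.randint(0, 2, (32,))
loss = pairwise_comparison_loss(model(x_a), model(x_b), a_rated_higher)
loss.backward()
```

Because the loss depends only on score differences within one listener's ratings, rater-specific scale shifts drop out, which is the intuition behind learning a single unified scale.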
📝 Abstract
Speech Quality Assessment (SQA) and Continuous Speech Emotion Recognition (CSER) are two key tasks in speech technology, both relying on listener ratings. These ratings, however, are inherently biased by individual listener factors. Previous approaches have introduced a mean listener scoring scale and modeled all individual listener scoring scales in the training set. Yet the mean-listener scale is prone to distortion from averaging ordinal data, introducing bias of its own, and learning multiple listener scoring scales while inferring with only the mean listener scale limits effectiveness. In contrast, our method models a single unified listener scoring scale, using comparison scores to capture the relative scoring relationships between utterances. Experimental results show that our method improves prediction performance on both SQA and CSER tasks, demonstrating its effectiveness and robustness.
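To make the comparison-score idea concrete, one standard way to formalize pairwise comparison learning is a Bradley-Terry objective (our illustration; the paper's exact loss may differ). With a scorer $f$ on the unified scale and a label $y_{ij} = 1$ when a listener rated utterance $x_i$ above $x_j$:

$$
P(x_i \succ x_j) = \sigma\big(f(x_i) - f(x_j)\big), \qquad
\mathcal{L} = -\sum_{(i,j)} \Big[ y_{ij} \log P(x_i \succ x_j) + (1 - y_{ij}) \log\big(1 - P(x_i \succ x_j)\big) \Big].
$$

Any additive listener offset $b_\ell$ cancels in the difference $(f(x_i) + b_\ell) - (f(x_j) + b_\ell)$, so within-listener comparisons are invariant to listener-specific scale shifts, which is exactly what allows one scale to be shared across raters.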