🤖 AI Summary
Problem: Evaluation of speech foundation models is fragmented: models exhibit heterogeneous performance across tasks, and existing evaluation protocols lack standardized criteria, impeding systematic alignment between model capabilities and task requirements.
Method: We propose the first three-dimensional evaluation taxonomy for speech foundation models, orthogonalizing assessment dimensions, model capabilities, and task requirements to systematically categorize and map prevailing evaluation methodologies.
Contribution/Results: The framework reveals structural gaps in current benchmarks, particularly in prosody modeling, interactivity, and reasoning, and provides theoretical grounding and actionable pathways for benchmark design. Empirical analysis demonstrates that the taxonomy enables precise alignment between model characteristics (e.g., representation learning, generation, or dialogue) and well-matched evaluation protocols. It further identifies critical directions for future evaluation development, including broader coverage of interactive and reasoning-intensive speech tasks.
📝 Abstract
Speech foundation models have recently achieved remarkable capabilities across a wide range of tasks. However, their evaluation remains disjointed across tasks and model types. Different models excel at distinct aspects of speech processing and thus require different evaluation protocols. This paper proposes a unified taxonomy that addresses the question: Which evaluation is appropriate for which model? The taxonomy defines three orthogonal axes: the **evaluation aspect** being measured, the **model capabilities** required to attempt the task, and the **task or protocol requirements** needed to perform it. We classify a broad set of existing evaluations and benchmarks along these axes, spanning areas such as representation learning, speech generation, and interactive dialogue. By mapping each evaluation to the capabilities a model exposes (e.g., speech generation, real-time processing) and to its methodological demands (e.g., fine-tuning data, human judgment), the taxonomy provides a principled framework for aligning models with suitable evaluation methods. It also reveals systematic gaps, such as limited coverage of prosody, interaction, or reasoning, that highlight priorities for future benchmark design. Overall, this work offers a conceptual foundation and practical guide for selecting, interpreting, and extending evaluations of speech models.
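To make the three-axis scheme concrete, here is a minimal Python sketch that encodes it as a small data model. All category names, the `Evaluation` structure, and the `is_suitable` check are illustrative assumptions for this page, not the paper's actual ontology or any released code.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Aspect(Enum):
    """Axis 1: the evaluation aspect being measured (illustrative values)."""
    REPRESENTATION_QUALITY = auto()
    GENERATION_QUALITY = auto()
    PROSODY = auto()
    REASONING = auto()


class Capability(Enum):
    """Axis 2: capabilities a model must expose to attempt the task."""
    SPEECH_UNDERSTANDING = auto()
    SPEECH_GENERATION = auto()
    REAL_TIME_PROCESSING = auto()
    DIALOGUE = auto()


class Requirement(Enum):
    """Axis 3: what the evaluation protocol itself demands."""
    FINE_TUNING_DATA = auto()
    HUMAN_JUDGMENT = auto()
    AUTOMATIC_METRICS_ONLY = auto()


@dataclass(frozen=True)
class Evaluation:
    """One benchmark or protocol, classified along the three axes."""
    name: str
    aspects: frozenset            # what a score on this evaluation measures
    required_capabilities: frozenset
    protocol_requirements: frozenset


def is_suitable(ev, model_capabilities, evaluator_resources):
    """An evaluation fits a setting iff the model exposes every required
    capability (axis 2) and the evaluator can meet every protocol
    requirement (axis 3); axis 1 then says what the result means."""
    return (ev.required_capabilities <= model_capabilities
            and ev.protocol_requirements <= evaluator_resources)


# Hypothetical spoken-dialogue benchmark classified on the three axes.
dialogue_bench = Evaluation(
    name="hypothetical-dialogue-bench",
    aspects=frozenset({Aspect.REASONING, Aspect.PROSODY}),
    required_capabilities=frozenset({Capability.DIALOGUE,
                                     Capability.SPEECH_GENERATION}),
    protocol_requirements=frozenset({Requirement.HUMAN_JUDGMENT}),
)

print(is_suitable(
    dialogue_bench,
    model_capabilities=frozenset({Capability.DIALOGUE,
                                  Capability.SPEECH_GENERATION,
                                  Capability.SPEECH_UNDERSTANDING}),
    evaluator_resources=frozenset({Requirement.HUMAN_JUDGMENT}),
))  # -> True
```

Note how the orthogonality claim surfaces here: axes 2 and 3 are independent feasibility filters (can this model attempt the task, and can this evaluator run the protocol?), while axis 1 is interpretive, determining what the resulting score actually measures.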