🤖 AI Summary
Speech-aware language models (SLMs) suffer catastrophic forgetting of text-based instruction-following capabilities due to speech-centric training, yet existing benchmarks conflate speech understanding with instruction execution, preventing the two skills from being evaluated in isolation. Method: We introduce the first standardized, decoupled evaluation framework for SLMs, comprising a multi-dimensional instruction test suite, prompt robustness analysis, cross-model comparison, and quantitative forgetting metrics. Contribution/Results: Experiments reveal that mainstream SLMs perform far worse than text-only LLMs on fundamental instruction tasks; most fail to reliably execute even simple instructions and are highly sensitive to minor prompt perturbations, with prompt tuning further degrading output consistency. This work is the first to systematically diagnose and quantify instruction-following deficits in SLMs, providing a reproducible benchmark and actionable insights for model improvement.
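To make the "instruction test suite" component concrete, here is a minimal sketch of how a rule-based, verifiable instruction check might be scored. The constraint kinds, function names, and scoring logic below are illustrative assumptions in the spirit of IFEval-style checks, not Speech-IFeval's actual implementation:

```python
import json

# Hypothetical rule-based instruction check. Constraint kinds and logic
# are illustrative assumptions, not the benchmark's actual test suite.
def follows_instruction(response: str, constraint: dict) -> bool:
    """Return True if the response satisfies one verifiable constraint."""
    kind = constraint["kind"]
    if kind == "max_words":                     # "answer in at most N words"
        return len(response.split()) <= constraint["n"]
    if kind == "must_include":                  # "include the keyword X"
        return constraint["keyword"].lower() in response.lower()
    if kind == "json_only":                     # "reply with valid JSON only"
        try:
            json.loads(response)
            return True
        except ValueError:
            return False
    raise ValueError(f"unknown constraint kind: {kind}")

def if_rate(pairs) -> float:
    """Instruction-following rate over (response, constraint) pairs."""
    checks = [follows_instruction(r, c) for r, c in pairs]
    return sum(checks) / len(checks)

# Example: check a single "at most 10 words" constraint.
ok = follows_instruction("Paris is the capital of France.",
                         {"kind": "max_words", "n": 10})
```

Decoupling of this kind is what separates instruction execution from speech understanding: the check inspects only the output's form, not the correctness of its speech-derived content.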
📄 Abstract
We introduce Speech-IFeval, an evaluation framework designed to assess instruction-following capabilities and quantify catastrophic forgetting in speech-aware language models (SLMs). Recent SLMs integrate speech perception with large language models (LLMs), often degrading textual capabilities due to speech-centric training. Existing benchmarks conflate speech perception with instruction-following, hindering evaluation of these distinct skills. To address this gap, we provide a benchmark for diagnosing the instruction-following abilities of SLMs. Our findings show that most SLMs struggle with even basic instructions, performing far worse than text-based LLMs. Additionally, these models are highly sensitive to prompt variations, often yielding inconsistent and unreliable outputs. We highlight core challenges and provide insights to guide future research, emphasizing the need for evaluation beyond task-level metrics.
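As a rough illustration of how catastrophic forgetting could be quantified against the text-only backbone, consider a relative-degradation score. The formula and numbers below are assumptions chosen for illustration, not necessarily the metric Speech-IFeval defines:

```python
# Hypothetical forgetting score: relative drop in instruction-following
# accuracy of an SLM versus its text-only LLM backbone on the same prompts.
# This formula is an illustrative assumption, not the paper's definition.
def forgetting_score(llm_accuracy: float, slm_accuracy: float) -> float:
    """Fraction of the backbone's instruction-following ability lost.

    0.0 means no degradation; 1.0 means all capability lost.
    """
    if llm_accuracy <= 0:
        raise ValueError("backbone accuracy must be positive")
    return max(0.0, (llm_accuracy - slm_accuracy) / llm_accuracy)

# Example (made-up numbers): a backbone at 85% and an SLM at 34% on the
# same text-instruction suite lose ~60% of the backbone's capability.
print(forgetting_score(0.85, 0.34))  # 0.6
```

Normalizing by the backbone's own score keeps the metric comparable across SLMs built on LLMs of different strengths.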