Speech-IFEval: Evaluating Instruction-Following and Quantifying Catastrophic Forgetting in Speech-Aware Language Models

📅 2025-05-25
📈 Citations: 1
✨ Influential: 0
🤖 AI Summary
Speech-language models (SLMs) suffer from catastrophic forgetting of text-based instruction-following capabilities due to speech-centric pretraining, yet existing benchmarks conflate speech understanding with instruction execution, hindering isolated evaluation. Method: We introduce the first standardized, decoupled evaluation framework for SLMs, comprising a multi-dimensional instruction test suite, prompt robustness analysis, cross-model comparison, and quantitative forgetting metrics. Contribution/Results: Experiments reveal that mainstream SLMs underperform significantly relative to pure-text LLMs on fundamental instruction tasks; most fail to reliably execute even simple instructions and exhibit high sensitivity to minor prompt perturbations. Prompt tuning further degrades output consistency. This work is the first to systematically diagnose and quantify instruction-following deficits in SLMs, providing a reproducible benchmark and actionable insights for model improvement.

๐Ÿ“ Abstract
We introduce Speech-IFEval, an evaluation framework designed to assess instruction-following capabilities and quantify catastrophic forgetting in speech-aware language models (SLMs). Recent SLMs integrate speech perception with large language models (LLMs), often degrading textual capabilities due to speech-centric training. Existing benchmarks conflate speech perception with instruction-following, hindering evaluation of these distinct skills. To address this gap, we provide a benchmark for diagnosing the instruction-following abilities of SLMs. Our findings show that most SLMs struggle with even basic instructions, performing far worse than text-based LLMs. Additionally, these models are highly sensitive to prompt variations, often yielding inconsistent and unreliable outputs. We highlight core challenges and provide insights to guide future research, emphasizing the need for evaluation beyond task-level metrics.
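The key idea in the abstract is decoupling instruction compliance from task performance: an instruction like "answer in valid JSON" can be verified programmatically, regardless of whether the answer itself is correct. As a minimal illustration of how such IFEval-style verifiable checks can be scored (the checker functions and sample responses below are hypothetical, not the paper's actual test suite):

```python
import json

# Sketch of verifiable-instruction checks: each checker returns True iff
# the response complies with a formatting instruction, independent of
# whether the content is correct. (Illustrative only; not the paper's code.)

def follows_word_limit(response: str, max_words: int) -> bool:
    """Check a 'respond in at most N words' instruction."""
    return len(response.split()) <= max_words

def follows_json_format(response: str) -> bool:
    """Check a 'reply with valid JSON only' instruction."""
    try:
        json.loads(response)
        return True
    except json.JSONDecodeError:
        return False

def instruction_following_rate(results: list) -> float:
    """Fraction of instructions the model executed correctly."""
    return sum(results) / len(results) if results else 0.0

# Score three hypothetical model responses against their instructions.
checks = [
    follows_word_limit("The speaker sounds calm.", max_words=10),
    follows_json_format('{"emotion": "calm"}'),
    follows_json_format("The emotion is calm."),  # violates the instruction
]
rate = instruction_following_rate(checks)  # 2 of 3 checks pass
```

Because compliance is checked by rule rather than by grading answer content, the same speech input can be paired with many instructions, isolating instruction-following from speech perception as the abstract describes.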
Problem

Research questions and friction points this paper is trying to address.

Assessing instruction-following in speech-aware language models
Quantifying catastrophic forgetting in speech-text integrated models
Addressing unreliable outputs due to prompt sensitivity in SLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decoupled benchmark isolating instruction-following from speech perception in SLMs
Quantitative metrics for catastrophic forgetting of text capabilities
Prompt-robustness analysis and cross-model comparison across mainstream SLMs