🤖 AI Summary
Current speech large language models (speech-LLMs) exhibit significant limitations in contextual reasoning and paralinguistic understanding (e.g., emotion, attitude), primarily due to the absence of a realistic, speech-based question-answering (QA) benchmark that jointly evaluates both capabilities. Method: We propose the first contextual paralinguistic QA (CPQA) framework for real-world speech, featuring two key innovations: (1) pseudo-paralinguistic label–driven speech data condensation, and (2) LLM-guided multi-turn contextual QA generation. Contribution/Results: We introduce the first high-quality, speech–semantics aligned CPQA benchmark explicitly designed for evaluating empathetic reasoning. Experiments show strong agreement between generated and human annotations (Pearson’s *r* > 0.89). Fine-tuning Qwen2-Audio-7B-Instruct on our data yields substantial gains in empathetic reasoning performance, demonstrating the framework’s effectiveness and its capacity to enhance model robustness.
📝 Abstract
Current speech-LLMs exhibit limited capability in contextual reasoning alongside paralinguistic understanding, primarily due to the lack of question-answer (QA) datasets that cover both aspects. We propose a novel framework for dataset generation from in-the-wild speech data that integrates contextual reasoning with paralinguistic information. It consists of pseudo-paralinguistic label-based condensation of in-the-wild speech and LLM-based contextual paralinguistic QA (CPQA) generation. The framework's effectiveness is validated by the strong correlation between evaluations of the Qwen2-Audio-7B-Instruct model on a dataset created by our framework and on a human-generated CPQA dataset. The results also reveal the speech-LLM's limitations in empathetic reasoning tasks, highlighting the need for such datasets and for more robust models. The proposed framework is the first of its kind and shows potential for training more robust speech-LLMs with paralinguistic reasoning capabilities.
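The validation step reported above boils down to a Pearson correlation between model evaluation scores on the framework-generated set and on the human-generated CPQA set. A minimal sketch of that check is below; the score lists are hypothetical placeholders, not values from the paper.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-category accuracies of a speech-LLM on the
# framework-generated vs. human-generated CPQA sets (placeholder numbers).
framework_scores = [0.62, 0.48, 0.71, 0.55, 0.66]
human_scores = [0.60, 0.45, 0.74, 0.52, 0.69]

print(f"Pearson's r = {pearson_r(framework_scores, human_scores):.3f}")
```

A value near 1 indicates the two evaluation sets rank the model's strengths and weaknesses consistently, which is the kind of agreement the abstract cites as evidence for the generated dataset's quality.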