Contextual Paralinguistic Data Creation for Multi-Modal Speech-LLM: Data Condensation and Spoken QA Generation

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current speech large language models (speech-LLMs) exhibit significant limitations in contextual reasoning and paralinguistic understanding (e.g., emotion, attitude), primarily due to the absence of a realistic, speech-based question-answering (QA) benchmark that jointly evaluates both capabilities. Method: We propose the first context-aware paralinguistic QA (CPQA) framework for real-world speech, featuring two key innovations: (1) pseudo-paralinguistic label–driven speech data condensation, and (2) LLM-guided multi-turn contextual QA generation. Contribution/Results: We introduce the first high-quality, speech–semantics aligned CPQA benchmark explicitly designed for empathic reasoning evaluation. Experiments show strong agreement between generated and human annotations (Pearson's *r* > 0.89). Fine-tuning Qwen2-Audio-7B-Instruct on our data yields substantial gains in empathic reasoning performance, demonstrating the framework's effectiveness and its capacity to enhance model robustness.

📝 Abstract
Current speech-LLMs exhibit limited capability in contextual reasoning alongside paralinguistic understanding, primarily due to the lack of Question-Answer (QA) datasets that cover both aspects. We propose a novel framework for dataset generation from in-the-wild speech data that integrates contextual reasoning with paralinguistic information. It consists of pseudo-paralinguistic-label-based data condensation of in-the-wild speech and LLM-based Contextual Paralinguistic QA (CPQA) generation. Its effectiveness is validated by the strong correlation between evaluations of the Qwen2-Audio-7B-Instruct model on a dataset created by our framework and on a human-generated CPQA dataset. The results also reveal the speech-LLM's limitations in handling empathetic reasoning tasks, highlighting the need for such datasets and for more robust models. The proposed framework is the first of its kind and has potential for training more robust speech-LLMs with paralinguistic reasoning capabilities.
Problem

Research questions and friction points this paper is trying to address.

Lack of QA datasets combining contextual reasoning and paralinguistic understanding
Need for robust speech-LLMs with paralinguistic reasoning capabilities
Challenges in handling empathetic reasoning tasks in speech-LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pseudo paralinguistic label-based data condensation
LLM-based Contextual Paralinguistic QA generation
Multi-modal speech-LLM dataset creation framework
Qiongqiong Wang
Lead Research Engineer, Institute for Infocomm Research, A*STAR, Singapore
Deep Learning · Artificial Intelligence · Machine Learning
Hardik B. Sailor
Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore
Tianchi Liu
Tencent, Singapore; Ph.D. @ National University of Singapore; Ex-A*STAR, Singapore
Text-to-Speech · Speech-LLM · Speaker Verification · Anti-spoofing · Deepfake Detection
Ai Ti Aw
Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore