SpokenNativQA: Multilingual Everyday Spoken Queries for LLMs

📅 2025-05-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM benchmarks offer no systematic way to evaluate multilingual spoken queries, particularly for low-resource languages, dialects, accents, and phonetic variation, in authentic conversational settings. Method: The paper introduces SpokenNativQA, the first multilingual, culturally aligned spoken question-answering (SQA) dataset for LLM evaluation, comprising approximately 33,000 naturally spoken question-answer pairs across multiple languages, including low-resource and dialect-rich ones. By incorporating speech variability, accents, and linguistic diversity, the dataset addresses the limitations of text-based QA benchmarks and supports end-to-end evaluation of ASR systems paired with LLMs. Contribution/Results: The authors benchmark several ASR systems and LLMs on SQA, report their findings, and publicly release the dataset and experimental scripts, establishing a reproducible, culturally aligned benchmark for assessing LLMs on speech-based interactions.

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable performance across various disciplines and tasks. However, benchmarking their capabilities with multilingual spoken queries remains largely unexplored. In this study, we introduce SpokenNativQA, the first multilingual and culturally aligned spoken question-answering (SQA) dataset designed to evaluate LLMs in real-world conversational settings. The dataset comprises approximately 33,000 naturally spoken questions and answers in multiple languages, including low-resource and dialect-rich languages, providing a robust benchmark for assessing LLM performance in speech-based interactions. SpokenNativQA addresses the limitations of text-based QA datasets by incorporating speech variability, accents, and linguistic diversity. We benchmark different ASR systems and LLMs for SQA and present our findings. We release the data at https://huggingface.co/datasets/QCRI/SpokenNativQA and the experimental scripts at https://llmebench.qcri.org/ for the research community.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs with multilingual spoken queries remains largely unexplored
Existing text-based QA datasets lack speech variability, accents, and linguistic diversity
Assessing LLM performance in real-world speech interactions requires a dedicated benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual spoken QA dataset for LLMs
Includes low-resource and dialect-rich languages
Benchmarks ASR systems and LLMs
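The ASR-then-LLM evaluation loop the paper benchmarks can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: `transcribe` and `answer` are hypothetical placeholders for a real ASR system and an LLM, and token-level F1 is a common QA metric used here for concreteness, not necessarily the score reported in the paper.

```python
# Hypothetical sketch of a spoken-QA evaluation loop: ASR transcribes the
# spoken question, an LLM answers it, and answers are scored against gold.

def transcribe(audio: dict) -> str:
    # Placeholder: a real system would run ASR on the audio waveform.
    return audio["gold_transcript"]

def answer(question: str) -> str:
    # Placeholder: a real system would prompt an LLM with the question.
    return {"what is the capital of qatar?": "Doha"}.get(question.lower(), "")

def token_f1(pred: str, gold: str) -> float:
    """Token-level F1 between a predicted and a gold answer."""
    p, g = pred.lower().split(), gold.lower().split()
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if not common:
        return 0.0
    prec, rec = common / len(p), common / len(g)
    return 2 * prec * rec / (prec + rec)

def evaluate(samples: list) -> float:
    """Run ASR -> LLM on each spoken question and average token F1."""
    scores = []
    for s in samples:
        question = transcribe(s["audio"])   # speech -> text
        pred = answer(question)             # text -> answer
        scores.append(token_f1(pred, s["answer"]))
    return sum(scores) / len(scores)

samples = [{"audio": {"gold_transcript": "What is the capital of Qatar?"},
            "answer": "Doha"}]
print(evaluate(samples))  # 1.0 on this toy example
```

In a real run, `transcribe` is where speech variability, accents, and dialects enter the pipeline, so ASR errors propagate into the LLM's input; comparing scores on gold transcripts versus ASR output isolates that effect.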