AudioBench: A Universal Benchmark for Audio Large Language Models

📅 2024-06-23
🏛️ arXiv.org
📈 Citations: 17
Influential: 1
🤖 AI Summary
Existing Audio Large Language Models (AudioLLMs) lack a standardized, comprehensive benchmark for systematically evaluating instruction-following capabilities conditioned on audio signals. Method: We introduce AudioBench, a multidimensional evaluation benchmark designed for AudioLLMs, covering three core aspects: speech understanding, audio (acoustic scene) understanding, and paralinguistic voice understanding. It integrates 8 task categories across 26 datasets, 7 of which are newly constructed, and pairs them with suitable evaluation metrics for instruction-following assessment. Contribution/Results: We open-source the evaluation toolkit, data, and a leaderboard. Evaluation of five popular AudioLLMs shows that no single model excels consistently across all tasks, revealing significant capability imbalances. All data, code, and results are publicly released as a testbed for future AudioLLM development.
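To make the benchmark's scope concrete, here is a minimal sketch of its layout as a plain Python structure. The three aspects and the 8-task / 26-dataset / 5-model totals come from the paper; the empty task and dataset slots are placeholders, since this summary does not enumerate the individual task categories.

```python
# Hypothetical layout of AudioBench's scope as a plain data structure.
# The three aspects and the totals below come from the paper; the empty
# lists are placeholders, not the benchmark's official task/dataset names.
AUDIOBENCH_SCOPE = {
    "speech_understanding": {"tasks": [], "datasets": []},
    "audio_scene_understanding": {"tasks": [], "datasets": []},
    "voice_understanding_paralinguistic": {"tasks": [], "datasets": []},
}

# Paper-level totals, usable as a sanity check once the slots are filled.
EXPECTED_TASKS = 8
EXPECTED_DATASETS = 26      # 7 of the 26 are newly proposed
EXPECTED_MODELS_EVALUATED = 5
```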

📝 Abstract
We introduce AudioBench, a universal benchmark designed to evaluate Audio Large Language Models (AudioLLMs). It encompasses 8 distinct tasks and 26 datasets, among which 7 are newly proposed. The evaluation targets three main aspects: speech understanding, audio scene understanding, and voice understanding (paralinguistic). Despite recent advancements, there is no comprehensive benchmark that evaluates AudioLLMs' instruction-following capabilities conditioned on audio signals. AudioBench addresses this gap by setting up the datasets as well as the desired evaluation metrics. We also evaluated the capabilities of five popular models and found that no single model excels consistently across all tasks. We outline the research outlook for AudioLLMs and anticipate that our open-sourced evaluation toolkit, data, and leaderboard will offer a robust testbed for future model development.
Problem

Research questions and friction points this paper is trying to address.

Lack of comprehensive benchmark for AudioLLMs' instruction following
Need for evaluation metrics across diverse audio understanding tasks
No single model performs consistently well on all audio tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comprehensive evaluation benchmark for AudioLLMs
Includes 8 tasks and 26 diverse datasets
Open-sourced evaluation toolkit, data, and leaderboard for future model development (see the usage sketch below)
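Since the toolkit, data, and leaderboard are open-sourced, a model developer's interaction with a benchmark like this is essentially a per-dataset scoring loop. The sketch below illustrates only that loop; DummyModel, the keyword-overlap scorer, and the toy dataset are stand-ins and are not AudioBench's actual API or metrics.

```python
# Minimal, self-contained sketch of an instruction-following evaluation loop
# for an AudioLLM. Every name here is illustrative, not AudioBench's API.
from statistics import mean

def keyword_overlap(response: str, reference: str) -> float:
    """Toy stand-in for the benchmark's real metrics (e.g. model-based judging)."""
    ref_tokens = set(reference.lower().split())
    if not ref_tokens:
        return 0.0
    hits = sum(tok in response.lower().split() for tok in ref_tokens)
    return hits / len(ref_tokens)

class DummyModel:
    """Placeholder for an AudioLLM: conditioned on audio plus a text instruction."""
    def generate(self, audio: bytes, instruction: str) -> str:
        return "a dog barking in the rain"

def evaluate(model: DummyModel, datasets: dict) -> dict:
    """Score one model per dataset so per-task imbalances stay visible."""
    report = {}
    for name, examples in datasets.items():
        scores = [
            keyword_overlap(model.generate(ex["audio"], ex["instruction"]), ex["reference"])
            for ex in examples
        ]
        report[name] = mean(scores)
    return report

if __name__ == "__main__":
    toy_data = {
        "toy_audio_captioning": [
            {"audio": b"", "instruction": "Describe the audio clip.",
             "reference": "A dog barks while rain falls."},
        ],
    }
    print(evaluate(DummyModel(), toy_data))
```

Reporting per-dataset scores rather than a single aggregate keeps the paper's central observation visible: no single model excels consistently across all tasks.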