🤖 AI Summary
Existing audio codec research lacks standardized definitions of semantic and acoustic tokens, and evaluation is typically confined to isolated tasks (e.g., reconstruction or ASR), hindering fair cross-model comparison.
Method: We propose precise, operationally defined semantic and acoustic tokenizations and introduce the first multidimensional evaluation framework tailored for large-model integration—assessing reconstruction fidelity, codebook index stability, decoder-only Transformer language modeling perplexity, and downstream probing task performance.
Contribution/Results: Through systematic empirical analysis, we uncover strong inter-dimensional correlations among the four metrics, validating both the theoretical soundness of our token definitions and the practical effectiveness of the evaluation framework. This work establishes the first standardized, reproducible, and multi-objective benchmark for audio tokenization—enabling holistic, comparable, and scalable assessment across diverse audio codecs.
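To make one of the framework's dimensions concrete, the sketch below illustrates a codebook index (ID) stability check: token IDs produced by the same codec for a clean utterance and a slightly perturbed copy are compared frame by frame. This is an assumed formulation for illustration only; the function name, perturbation scheme, and agreement score are not taken from the paper.

```python
# Minimal sketch of a codebook-ID-stability check (assumed formulation;
# the paper's exact metric may differ). Given token IDs produced by the
# same codec for a clean utterance and a slightly perturbed copy, report
# the fraction of frames whose IDs are unchanged in each codebook.
import numpy as np

def id_stability(ids_clean: np.ndarray, ids_perturbed: np.ndarray) -> np.ndarray:
    """Frame-level ID agreement per codebook.

    Both arrays have shape (num_codebooks, num_frames); higher is more stable.
    """
    assert ids_clean.shape == ids_perturbed.shape
    return (ids_clean == ids_perturbed).mean(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.integers(0, 1024, size=(8, 500))   # 8 codebooks, 500 frames (stand-in IDs)
    noisy = clean.copy()
    flip = rng.random(clean.shape) < 0.05          # perturb roughly 5% of IDs
    noisy[flip] = rng.integers(0, 1024, size=flip.sum())
    print("per-codebook stability:", id_stability(clean, noisy).round(3))
```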
📝 Abstract
Multimodal Large Language Models (MLLMs) have been widely applied to speech and music. This trend has drawn increasing attention to audio tokenization for Large Models (LMs). Unlike text tokens, which carry only semantics, audio tokens must both capture global semantic content and preserve fine-grained acoustic details. Moreover, they provide a discrete representation of speech and music that can be integrated effectively into MLLMs. However, existing research lacks suitable definitions of semantic and acoustic tokens. In addition, the evaluation of different codecs typically concentrates on specific domains or tasks, such as reconstruction or Automatic Speech Recognition (ASR), which prevents fair and comprehensive comparisons. To address these problems, this paper provides suitable definitions for semantic and acoustic tokens and introduces a systematic evaluation framework that assesses codecs across four dimensions: audio reconstruction metrics, codebook index (ID) stability, decoder-only Transformer perplexity, and performance on downstream probe tasks. Our results confirm the soundness of the proposed definitions and reveal correlations among reconstruction metrics, codebook ID stability, perplexity, and downstream probe task performance.
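To make the perplexity dimension concrete, here is a minimal, self-contained sketch of scoring codec token-ID sequences with a small decoder-only Transformer, where perplexity is the exponential of the mean next-token cross-entropy. The model (`TinyDecoderLM`), its hyperparameters, and the random stand-in ID sequences are assumptions for illustration, not the paper's actual setup.

```python
# Minimal sketch of the perplexity dimension (not the paper's exact model):
# a tiny decoder-only Transformer scores codec token-ID sequences, and
# perplexity = exp(mean next-token cross-entropy). Sizes are placeholders.
import torch
import torch.nn as nn

class TinyDecoderLM(nn.Module):
    def __init__(self, vocab_size=1024, d_model=256, nhead=4, num_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, 4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers)  # decoder-only via causal mask
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, ids):                                      # ids: (batch, time)
        T = ids.size(1)
        # Additive causal mask: -inf above the diagonal blocks attention to the future.
        causal = torch.triu(torch.full((T, T), float("-inf"), device=ids.device), diagonal=1)
        h = self.blocks(self.embed(ids), mask=causal)
        return self.head(h)                                      # (batch, time, vocab)

@torch.no_grad()
def perplexity(model, ids):
    logits = model(ids[:, :-1])                                  # predict token t+1 from tokens <= t
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), ids[:, 1:].reshape(-1)
    )
    return loss.exp().item()

if __name__ == "__main__":
    ids = torch.randint(0, 1024, (4, 200))                       # stand-in codec ID sequences
    print("perplexity:", perplexity(TinyDecoderLM().eval(), ids))
```

An untrained model on random IDs gives perplexity near the vocabulary size; in practice the LM would first be fitted on token sequences emitted by each codec so that lower perplexity indicates IDs that are easier for a decoder-only Transformer to model.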