TaDiCodec: Text-aware Diffusion Speech Tokenizer for Speech Language Modeling

📅 2025-08-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing speech tokenizers rely on multi-layer residual vector quantization, high frame rates, or auxiliary pre-trained models, necessitating two-stage training and suffering from a fundamental disconnect between reconstruction fidelity and generative capability. This work proposes TaDiCodec—the first text-aware, end-to-end diffusion-based speech codec—which integrates textual conditioning directly into the diffusion autoencoder’s decoding process and achieves speech discretization using only a single codebook layer. Built upon a diffusion Transformer architecture, TaDiCodec supports unified one-stage training without semantic distillation or auxiliary pre-training. Evaluated on 24 kHz speech, it operates at an ultra-low frame rate of 6.25 Hz and bitrate of 0.0875 kbps, while significantly outperforming baselines in WER, speaker similarity (SIM), and UTMOS. TaDiCodec effectively bridges the reconstruction–generation gap, establishing a new paradigm for speech language modeling that is efficient, lightweight, and zero-shot compatible.

📝 Abstract
Speech tokenizers serve as foundational components for speech language models, yet current designs exhibit several limitations: 1) dependence on multi-layer residual vector quantization structures or high frame rates, 2) reliance on auxiliary pre-trained models for semantic distillation, and 3) requirements for complex two-stage training processes. In this work, we introduce the Text-aware Diffusion Transformer Speech Codec (TaDiCodec), a novel approach designed to overcome these challenges. TaDiCodec employs end-to-end optimization for quantization and reconstruction through a diffusion autoencoder, while integrating text guidance into the diffusion decoder to enhance reconstruction quality and achieve optimal compression. TaDiCodec achieves an extremely low frame rate of 6.25 Hz and a corresponding bitrate of 0.0875 kbps with a single-layer codebook for 24 kHz speech, while maintaining superior performance on critical speech generation evaluation metrics such as Word Error Rate (WER), speaker similarity (SIM), and speech quality (UTMOS). Notably, TaDiCodec employs a single-stage, end-to-end training paradigm, obviating the need for auxiliary pre-trained models. We also validate the compatibility of TaDiCodec in language-model-based zero-shot text-to-speech with both autoregressive modeling and masked generative modeling, demonstrating its effectiveness and efficiency for speech language modeling, as well as a significantly reduced reconstruction–generation gap. Audio samples are available at https://tadicodec.github.io/, and we release code and model checkpoints at https://github.com/HeCheng0625/Diffusion-Speech-Tokenizer.
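The reported numbers can be cross-checked with simple arithmetic (this calculation is mine, not from the paper): at 6.25 tokens per second, 0.0875 kbps works out to 14 bits per token, which would correspond to a single 2^14 = 16,384-entry codebook. A minimal sketch:

```python
import math

def bitrate_bps(frame_rate_hz: float, codebook_size: int, num_codebooks: int = 1) -> float:
    """Bitrate of a discrete tokenizer: tokens/sec x bits/token x number of codebooks."""
    return frame_rate_hz * math.log2(codebook_size) * num_codebooks

# TaDiCodec's reported operating point: 6.25 Hz, single codebook.
# The 16,384-entry codebook size is inferred from the stated bitrate, not given here.
print(bitrate_bps(6.25, 16384))  # -> 87.5 bps = 0.0875 kbps
```

Note how a single low-rate codebook reaches this bitrate without the stacked residual layers that multi-codebook RVQ designs need.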
Problem

Research questions and friction points this paper is trying to address.

Overcoming limitations of existing speech tokenizers: multi-layer residual vector quantization, high frame rates, and two-stage training
Integrating text guidance into decoding to enhance reconstruction quality
Achieving an ultra-low frame rate (6.25 Hz) with single-stage, end-to-end training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Employs end-to-end optimization via diffusion autoencoder
Integrates text guidance into diffusion decoder
Uses single-stage training without auxiliary models
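The single-codebook discretization contrasted here with multi-layer RVQ can be illustrated with a toy nearest-neighbor quantizer (a minimal sketch, not the paper's implementation; the codebook entries and input frames below are invented for illustration):

```python
def quantize(frame, codebook):
    """Map a continuous latent frame to the index of its nearest codebook entry
    (squared Euclidean distance). One index per frame = a single codebook layer."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist2(frame, codebook[i]))

# Toy 4-entry codebook of 2-d latents (illustrative values only).
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
frames = [(0.1, -0.2), (0.9, 1.1)]
tokens = [quantize(f, codebook) for f in frames]
print(tokens)  # -> [0, 3]
```

In an RVQ codec, each frame would instead pass through several such lookups in sequence, each quantizing the residual of the last; emitting one token per frame is what keeps TaDiCodec's bitrate at a single-codebook level.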