🤖 AI Summary
Existing neural speech codecs suffer from degraded reconstruction quality and poor attribute disentanglement at low bitrates (i.e., with few discrete tokens), primarily because conventional residual vector quantization is inefficient at modeling the strong entanglement among timbre, prosody, and linguistic content in speech. To address this, we propose the first ternary disentangled representation framework: a global timbre vector, a long-stride prosody encoder, and a content-specialized encoder, each optimized with component-specific training objectives and a discrete token reconstruction strategy. Experiments demonstrate that our method achieves state-of-the-art performance across PESQ, STOI, and Disentanglement-Completeness-Informativeness (DCI) metrics. Notably, it maintains high-fidelity reconstruction even when reducing the token count by 30% at equivalent bitrates, significantly improving both speech quality and controllability under low-bitrate constraints.
📄 Abstract
Neural speech codecs have gained great attention for their outstanding reconstruction with discrete token representations, and they are a crucial component in generative tasks such as speech coding and large language models (LLMs). However, most works based on residual vector quantization perform worse with fewer tokens due to low coding efficiency when modeling complex, coupled information. In this paper, we propose a neural speech codec named FreeCodec which employs a more effective encoding framework by decomposing the intrinsic properties of speech into different components: 1) a global vector is extracted as the timbre information, 2) a prosody encoder with a long stride is used to model the prosody information, and 3) the content information is extracted by a dedicated content encoder. Using different training strategies, FreeCodec achieves state-of-the-art performance in reconstruction and disentanglement scenarios. Results from subjective and objective experiments demonstrate that our framework outperforms existing methods.
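The three-way decomposition described above can be illustrated with a minimal sketch. This is not the FreeCodec implementation; the pooling and stride choices below are illustrative assumptions, standing in for the learned timbre, prosody, and content encoders and showing only the shapes each component would produce.

```python
import numpy as np

def decompose_speech(features, prosody_stride=8):
    """Hypothetical sketch of a ternary speech decomposition.

    features: (T, D) array of frame-level speech features.
    Returns (timbre, prosody, content):
      - timbre:  one global (D,) vector per utterance (mean pooling here,
                 standing in for a learned global timbre extractor),
      - prosody: a coarse (T // prosody_stride, D) sequence from
                 long-stride downsampling,
      - content: frame-level (T, D) features with the global component
                 removed, standing in for a content-specialized encoder.
    """
    T, D = features.shape
    # 1) Global timbre vector: one vector summarizing the whole utterance.
    timbre = features.mean(axis=0)
    # 2) Prosody: average over non-overlapping long-stride windows,
    #    giving a much shorter sequence (low token rate).
    n = T // prosody_stride
    prosody = features[: n * prosody_stride].reshape(n, prosody_stride, D).mean(axis=1)
    # 3) Content: frame-rate residual after removing the global vector.
    content = features - timbre
    return timbre, prosody, content
```

The key point the sketch conveys is the rate hierarchy: timbre is utterance-level (one vector), prosody is heavily downsampled, and only content stays at the frame rate, so most of the discrete-token budget can be spent where it matters.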