🤖 AI Summary
Existing audio codecs suffer from a severely constrained embedding space at high compression rates (e.g., <3 kbps), leading to sharp fidelity degradation. This paper proposes a universal high-fidelity neural audio codec applicable to speech, music, and general audio. The method addresses the embedding bottleneck via three key contributions: (1) Residual Experts Vector Quantization (REVQ), which substantially expands the effective embedding capacity by routing each residual quantization stage through expert codebooks; (2) an STFT-domain adversarial discriminator that enforces spectral realism and perceptual fidelity; and (3) a strategy that ensures the expanded embedding space is fully utilized, improving the bitrate–quality trade-off. Experiments demonstrate state-of-the-art performance at ≤3 kbps, with significant gains in MOS, STOI, and ESTOI. Ablation studies confirm the distinct and complementary contributions of each component.
📝 Abstract
We present a universal high-fidelity neural audio compression algorithm that compresses speech, music, and general audio to below 3 kbps. Although current state-of-the-art audio codecs excel at audio compression, their effectiveness declines sharply when the embedding space is reduced, as required for higher compression rates. To address this problem, we propose Residual Experts Vector Quantization (REVQ), which significantly expands the available embedding space and improves performance while hardly sacrificing bandwidth. Furthermore, we introduce a strategy to ensure that this vast embedding space is fully utilized. Additionally, we propose an STFT-based discriminator to guide the generator toward producing spectrograms indistinguishable from real ones. Detailed ablations demonstrate that the proposed approach outperforms baseline methods.
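The paper itself does not ship code, but the core residual-experts idea can be sketched: each residual quantization stage holds several small expert codebooks, the encoder routes the current residual to whichever expert reconstructs it best, and the quantization error is passed on to the next stage. With E experts of N entries per stage, a stage addresses E·N effective codewords for log2(E) + log2(N) index bits. All names, shapes, and the greedy routing rule below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical NumPy sketch of Residual Experts Vector Quantization (REVQ).
# Shapes, names, and the routing rule are assumptions for illustration only.
import numpy as np

def nearest(codebook, x):
    """Index and entry of the codebook vector closest to x."""
    idx = int(np.argmin(np.linalg.norm(codebook - x, axis=1)))
    return idx, codebook[idx]

def revq_encode(x, stages):
    """stages: per-stage lists of expert codebooks, each of shape (N, d).
    Returns one (expert_id, code_id) pair per stage plus the final residual."""
    residual = x.astype(float).copy()
    codes = []
    for experts in stages:
        best = None
        for e, cb in enumerate(experts):
            idx, q = nearest(cb, residual)
            err = np.linalg.norm(residual - q)
            if best is None or err < best[0]:
                best = (err, e, idx, q)
        _, e, idx, q = best
        codes.append((e, idx))
        residual -= q          # pass the quantization error to the next stage
    return codes, residual

def revq_decode(codes, stages):
    """Sum the selected codewords across all stages."""
    return sum(stages[s][e][idx] for s, (e, idx) in enumerate(codes))

# Toy usage: 3 stages, each with 4 experts of 16 codes -> 6 bits per stage
# (2 expert bits + 4 index bits) addressing 64 effective codewords.
rng = np.random.default_rng(0)
d = 8
stages = [[rng.standard_normal((16, d)) for _ in range(4)] for _ in range(3)]
x = rng.standard_normal(d)
codes, residual = revq_encode(x, stages)
x_hat = revq_decode(codes, stages)
# By construction, x equals the reconstruction plus the leftover residual.
```

Note that the real codec quantizes learned encoder features frame by frame and trains the codebooks jointly with the adversarial losses; the frozen random codebooks here only illustrate the routing and bit-accounting.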