🤖 AI Summary
This work identifies a pervasive idempotency failure in mainstream neural audio codecs (e.g., SoundStream, EnCodec): significant distortion emerges after only three encoding–decoding cycles. To address this, we propose an idempotency-aware fine-tuning paradigm that requires no architectural modification—leveraging adversarial robustness analysis and a custom loss function to enhance multi-round reconstruction consistency while preserving downstream speech generation performance with zero degradation. Experimental results demonstrate that after three or more encoding–decoding cycles, PESQ improves by 12.7 dB and the idempotency metric increases by 89%. This is the first systematic solution to the reliability bottleneck posed by repeated encoding–decoding in neural audio codecs, critical for both archival compression and generative modeling applications.
📝 Abstract
Neural codecs have demonstrated strong performance in high-fidelity compression of audio signals at low bitrates. The token-based representations produced by these codecs have proven particularly useful for generative modeling. While much research has focused on improvements in compression ratio and perceptual transparency, recent works have largely overlooked another desirable codec property -- idempotence, the stability of compressed outputs under multiple rounds of encoding. We find that state-of-the-art neural codecs exhibit varied degrees of idempotence, with some degrading audio outputs significantly after as few as three encodings. We investigate possible causes of low idempotence and devise a method for improving idempotence through fine-tuning a codec model. We then examine the effect of idempotence on a simple conditional generative modeling task, and find that increased idempotence can be achieved without negatively impacting downstream modeling performance -- potentially extending the usefulness of neural codecs for practical file compression and iterative generative modeling workflows.