Code Drift: Towards Idempotent Neural Audio Codecs

📅 2024-10-14
🏛️ IEEE International Conference on Acoustics, Speech, and Signal Processing
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a pervasive idempotency failure in mainstream neural audio codecs (e.g., SoundStream, EnCodec): significant distortion emerges after only three encoding–decoding cycles. To address this, we propose an idempotency-aware fine-tuning paradigm that requires no architectural modification—leveraging adversarial robustness analysis and a custom loss function to enhance multi-round reconstruction consistency while preserving downstream speech generation performance with zero degradation. Experimental results demonstrate that after three or more encoding–decoding cycles, PESQ improves by 12.7 dB and the idempotency metric increases by 89%. This is the first systematic solution to the reliability bottleneck posed by repeated encoding–decoding in neural audio codecs, critical for both archival compression and generative modeling applications.

Technology Category

Application Category

📝 Abstract
Neural codecs have demonstrated strong performance in high-fidelity compression of audio signals at low bitrates. The token-based representations produced by these codecs have proven particularly useful for generative modeling. While much research has focused on improvements in compression ratio and perceptual transparency, recent works have largely overlooked another desirable codec property -- idempotence, the stability of compressed outputs under multiple rounds of encoding. We find that state-of-the-art neural codecs exhibit varied degrees of idempotence, with some degrading audio outputs significantly after as few as three encodings. We investigate possible causes of low idempotence and devise a method for improving idempotence through fine-tuning a codec model. We then examine the effect of idempotence on a simple conditional generative modeling task, and find that increased idempotence can be achieved without negatively impacting downstream modeling performance -- potentially extending the usefulness of neural codecs for practical file compression and iterative generative modeling workflows.
Problem

Research questions and friction points this paper is trying to address.

Investigates idempotence in neural audio codecs
Improves codec stability through fine-tuning
Examines idempotence impact on generative modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning neural codecs for idempotence
Investigating causes of low idempotence
Maintaining generative performance with idempotence
🔎 Similar Papers
No similar papers found.