🤖 AI Summary
To address the challenges of modeling inter-subband feature disparities and preserving low-energy high-frequency components in speech enhancement using state space models (SSMs), this paper proposes CSMamba—a cross-band and subband-cooperative framework. The method introduces three key innovations: (1) a bandwidth-adaptive subband splitting module that partitions the spectrum into four non-uniform frequency bands; (2) a spectral restoration module that leverages band similarity-driven weight assignment and multi-perspective cross-band fusion to enhance structured cross-band modeling; and (3) a lightweight SSM architecture built upon an improved Mamba backbone. Evaluated on DNS Challenge 2021, CSMamba achieves superior performance over multiple state-of-the-art methods in PESQ, STOI, and SI-SNR, while significantly reducing parameter count.
📝 Abstract
Recently, the state space model (SSM) exemplified by Mamba has shown remarkable performance in long-term sequence modeling tasks, including speech enhancement. However, due to substantial differences in sub-band features, applying the same SSM to all sub-bands limits the model's inference capability. Additionally, when processing each time frame of the time-frequency representation, the SSM may forget low-energy high-frequency information, making it difficult to restore structure in the high-frequency bands. To this end, we propose Cross- and Sub-band Mamba (CSMamba). To help the SSM handle different sub-band features flexibly, we propose a band split block that splits the full band into four sub-bands of different widths based on their information similarity. We then allocate independent weights to each sub-band, thereby reducing the inference burden on the SSM. Furthermore, to mitigate the SSM's forgetting of low-energy information in the high-frequency bands, we introduce a spectrum restoration block that enhances the representation of cross-band features from multiple perspectives. Experimental results on the DNS Challenge 2021 dataset demonstrate that CSMamba outperforms several state-of-the-art (SOTA) speech enhancement methods on three objective evaluation metrics with fewer parameters.
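The band split idea described above can be sketched with plain NumPy: partition the frequency axis into four non-uniform sub-bands and project each band with its own independent weights into a shared feature space. The band boundaries, spectrogram size, and feature dimension below are hypothetical choices for illustration; the paper does not specify them here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_bins, d_model = 100, 257, 64  # hypothetical sizes

# A stand-in magnitude spectrogram: (time frames, frequency bins).
spec = rng.standard_normal((n_frames, n_bins))

# Four sub-bands of increasing width (narrow at low frequencies, wide at
# high frequencies) -- the split points here are assumed, not the paper's.
band_edges = [0, 32, 96, 176, 257]

# Independent projection weights per sub-band, mapping each band's bins
# to a shared feature dimension so every band carries its own parameters.
weights = [
    rng.standard_normal((band_edges[i + 1] - band_edges[i], d_model))
    for i in range(4)
]

# Project each sub-band with its own weights, then stack along a band axis.
band_feats = np.stack(
    [spec[:, band_edges[i]:band_edges[i + 1]] @ weights[i] for i in range(4)],
    axis=1,
)
print(band_feats.shape)  # (100, 4, 64): frames x bands x features
```

Giving each band its own weight matrix is what lets the downstream SSM treat sub-bands with different statistics differently, instead of forcing one set of parameters to fit the whole spectrum.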