🤖 AI Summary
Existing multimodal continual learning approaches focus on coarse-grained tasks and struggle with fine-grained scenarios involving modality entanglement—particularly in audio-guided continual segmentation, where semantic drift (e.g., vocalizing objects misclassified as background) and co-occurrence confusion (mutual misclassification of frequently co-occurring classes) arise. This work introduces, for the first time, the **continual audio-visual segmentation task**, establishing a novel multimodal continual learning paradigm tailored to fine-grained class-incremental settings. We propose a **multimodal sample selection strategy** and a **collision-detection-based sample replay mechanism**, jointly optimizing audio-visual stream representations to mitigate modality entanglement. Evaluated under three audio-visual incremental learning protocols, our method significantly outperforms unimodal baselines across all metrics, demonstrating both effectiveness and robustness in preserving discriminative cross-modal semantics during continual learning.
📝 Abstract
Recently, significant progress has been made in multi-modal continual learning, which aims to learn new tasks sequentially in multi-modal settings while preserving performance on previously learned ones. However, existing methods mainly focus on coarse-grained tasks and fall short of addressing modality entanglement in fine-grained continual learning settings. To bridge this gap, we introduce a novel Continual Audio-Visual Segmentation (CAVS) task, which aims to continuously segment new classes guided by audio. Through comprehensive analysis, two critical challenges are identified: 1) multi-modal semantic drift, where sounding objects are labeled as background in sequential tasks; 2) co-occurrence confusion, where frequently co-occurring classes tend to be confused with each other. In this work, a Collision-based Multi-modal Rehearsal (CMR) framework is designed to address these challenges. Specifically, for multi-modal semantic drift, a Multi-modal Sample Selection (MSS) strategy is proposed to select samples with high modal consistency for rehearsal. Meanwhile, for co-occurrence confusion, a Collision-based Sample Rehearsal (CSR) mechanism is designed, which increases the rehearsal frequency of confusable classes during training. Moreover, we construct three audio-visual incremental scenarios to verify the effectiveness of our method. Comprehensive experiments demonstrate that our method significantly outperforms single-modal continual learning methods.
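To make the collision-based rehearsal idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual CSR implementation): a sampler tracks "collisions" — how often one class is misclassified as another — and draws rehearsal samples with probability weighted by each class's collision count, so confusable classes are replayed more often. All class names and the `CollisionRehearsalSampler` interface are illustrative assumptions.

```python
import random
from collections import defaultdict


class CollisionRehearsalSampler:
    """Illustrative collision-weighted rehearsal sampler (hypothetical sketch;
    the paper's CSR mechanism may differ in its weighting and bookkeeping)."""

    def __init__(self, memory):
        # memory: dict mapping class name -> list of stored rehearsal samples
        self.memory = memory
        self.collisions = defaultdict(int)  # class -> misclassification count

    def record_collision(self, true_class, predicted_class):
        # Called when an object of true_class is predicted as another class.
        if true_class != predicted_class:
            self.collisions[true_class] += 1

    def sample(self, k):
        # Draw k rehearsal samples; each class is weighted by 1 + its
        # collision count, so frequently confused classes appear more often.
        classes = list(self.memory)
        weights = [1 + self.collisions[c] for c in classes]
        chosen = random.choices(classes, weights=weights, k=k)
        return [random.choice(self.memory[c]) for c in chosen]
```

In this sketch, a class that is never confused keeps a baseline weight of 1, so it is still occasionally replayed rather than forgotten entirely.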