Continual Learning for Multiple Modalities

πŸ“… 2025-03-11
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing continual learning methods are predominantly unimodal and thus struggle with catastrophic forgetting when confronted with multimodal task streams such as images, videos, audio, depth, and text. This paper introduces the first unified framework for multimodal continual learning. Our method features a cross-modal knowledge aggregation mechanism that jointly leverages intra-modal self-supervised regularization and inter-modal contribution-aware alignment. To mitigate alignment inaccuracies induced by modality bias, we propose a modality-embedding recalibration strategy. Crucially, our approach operates without explicit modality identifiers, enabling modality-agnostic dynamic realignment of the embedding space throughout training. Evaluated on a comprehensive multimodal continual learning benchmark, our framework substantially outperforms existing state-of-the-art methods, demonstrating both strong generalization across diverse modalities and practical deployability.

πŸ“ Abstract
Continual learning aims to learn knowledge of tasks observed in sequential time steps while mitigating the forgetting of previously learned knowledge. Existing methods were proposed under the assumption of learning a single modality (e.g., image) over time, which limits their applicability in scenarios involving multiple modalities. In this work, we propose a novel continual learning framework that accommodates multiple modalities (image, video, audio, depth, and text). We train a model to align various modalities with text, leveraging its rich semantic information. However, this increases the risk of forgetting previously learned knowledge, exacerbated by the differing input traits of each task. To alleviate the overwriting of the previous knowledge of modalities, we propose a method for aggregating knowledge within and across modalities. The aggregated knowledge is obtained by assimilating new information through self-regularization within each modality and associating knowledge between modalities by prioritizing contributions from relevant modalities. Furthermore, we propose a strategy that re-aligns the embeddings of modalities to resolve biased alignment between modalities. We evaluate the proposed method in a wide range of continual learning scenarios using multiple datasets with different modalities. Extensive experiments demonstrate that our method outperforms existing methods across these scenarios, regardless of whether the identity of the modality is given.
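The abstract's aggregation idea, assimilating new information through self-regularization within each modality and prioritizing contributions from relevant modalities across modalities, can be sketched as follows. This is an illustrative reading, not the authors' actual formulation: self-regularization is approximated as blending toward the previous model's embedding, and contribution weights as a softmax over cosine similarity to the text anchor.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def self_regularize(curr, prev, lam=0.5):
    """Intra-modal self-regularization (illustrative stand-in): blend the
    current embedding toward the previous model's embedding so that new
    tasks cannot fully overwrite old knowledge."""
    return [(1 - lam) * c + lam * p for c, p in zip(curr, prev)]

def contribution_weights(modality_embs, text_emb):
    """Inter-modal contribution-aware weights (assumed form): modalities
    whose embeddings align better with the text anchor contribute more
    (softmax over cosine similarities)."""
    sims = [cosine(e, text_emb) for e in modality_embs]
    m = max(sims)
    exps = [math.exp(s - m) for s in sims]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate(modality_embs, text_emb):
    """Aggregate knowledge across modalities, weighted by contribution."""
    w = contribution_weights(modality_embs, text_emb)
    dim = len(text_emb)
    return [sum(w[i] * modality_embs[i][d] for i in range(len(w)))
            for d in range(dim)]

# toy example: the image embedding nearly matches the text anchor,
# while audio and depth do not
text = [1.0, 0.0, 0.0, 0.0]
image = [0.9, 0.1, 0.0, 0.0]
audio = [0.0, 1.0, 0.0, 0.0]
depth = [0.0, 0.0, 1.0, 0.0]
w = contribution_weights([image, audio, depth], text)
print(max(range(3), key=lambda i: w[i]))  # 0: image gets the largest weight
```

The softmax keeps every modality's contribution non-zero, so less text-aligned modalities are down-weighted rather than discarded; the actual paper may use a different weighting or loss.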
Problem

Research questions and friction points this paper is trying to address.

Catastrophic forgetting worsens when tasks arrive in multiple modalities with differing input traits.
Aligning modalities to a shared text space risks overwriting previously learned modality knowledge.
Modality bias causes biased alignment between modality embeddings.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel continual learning framework accommodating image, video, audio, depth, and text.
Knowledge aggregation within each modality (self-regularization) and across modalities (contribution-aware weighting).
Embedding re-alignment strategy that resolves biased alignment between modalities.
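The re-alignment idea can be illustrated with a minimal sketch. Assuming, purely for illustration, that modality bias manifests as a constant offset between a modality's embeddings and the text space (this specific form is an assumption, not the paper's stated recalibration procedure), removing the mean offset re-centers the modality on the text anchor:

```python
def realign(modality_embs, text_embs):
    """Re-align a modality's embeddings toward the text space by removing
    the modality-specific mean offset (the "modality gap"). Illustrative
    stand-in for the paper's modality-embedding recalibration.
    modality_embs, text_embs: lists of equal-length vectors."""
    n, dim = len(modality_embs), len(text_embs[0])
    gap = [sum(e[d] for e in modality_embs) / n -
           sum(t[d] for t in text_embs) / n
           for d in range(dim)]
    # shift every embedding so the modality centroid matches the text centroid
    return [[e[d] - gap[d] for d in range(dim)] for e in modality_embs]

# toy check: a constant bias relative to the text space is removed exactly
text = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
bias = [10.0, -5.0]
audio = [[t[0] + bias[0], t[1] + bias[1]] for t in text]
print(realign(audio, text) == text)  # True: the constant offset is removed
```

Centroid matching only corrects a uniform shift; handling sample-dependent bias, as the modality-agnostic setting (no modality identifiers) would require, is where the paper's dynamic realignment goes beyond this sketch.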
πŸ”Ž Similar Papers
No similar papers found.