AI Summary
To address catastrophic forgetting and insufficient multimodal synergy in class-incremental learning (CIL) for vision-language models (VLMs), this paper proposes PROOF: a framework that freezes pretrained image and text encoders to preserve general-purpose representations, dynamically expands task-specific linear projection heads, and introduces a cross-modal fusion module for joint alignment and adaptive integration of visual and textual features. By decoupling representation learning from task adaptation, PROOF avoids parameter redundancy and interference, effectively balancing stability and plasticity. Evaluated on nine standard CIL benchmarks, PROOF consistently outperforms state-of-the-art methods, reducing average forgetting by 32.7%, improving incremental-stage accuracy, and enhancing cross-task generalization. The framework establishes an efficient, scalable paradigm for multimodal continual learning.
Abstract
Class-Incremental Learning (CIL), or continual learning, is a capability desired in the real world: a learning system must adapt to new tasks without forgetting former ones. While traditional CIL methods focus on visual information to grasp core features, recent advances in Vision-Language Models (VLMs) have shown promising capabilities in learning generalizable representations with the aid of textual information. However, when continually trained on new classes, VLMs often suffer from catastrophic forgetting of former knowledge. Applying VLMs to CIL poses two major challenges: 1) how to adapt the model without forgetting; and 2) how to make full use of the multi-modal information. To this end, we propose PROjectiOn Fusion (PROOF), which enables VLMs to learn without forgetting. For the first challenge, we train task-specific projections on top of the frozen image/text encoders. When a new task arrives, new projections are expanded and former projections are frozen, alleviating the forgetting of old concepts. For the second challenge, we propose a fusion module to better exploit cross-modal information. By jointly adjusting visual and textual features, the model can capture semantic information with stronger representation ability. Extensive experiments on nine benchmark datasets validate that PROOF achieves state-of-the-art performance. Code is available at https://github.com/zhoudw-zdw/PROOF
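The projection-expansion idea above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' implementation: encoders are assumed frozen, each new task appends one linear projection while all earlier projections are marked non-trainable, and the per-task outputs are simply summed (PROOF instead combines them through a learned cross-modal fusion module). All class and function names here are illustrative.

```python
import random

def matvec(W, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

class ProjectionPool:
    """Toy sketch of task-specific projections over a frozen encoder.

    One linear projection per task; on each new task, existing
    projections are frozen and a fresh trainable one is appended.
    """
    def __init__(self, dim):
        self.dim = dim
        self.projections = []  # one weight matrix per task
        self.trainable = []    # parallel flags: only the newest is trainable

    def expand(self):
        # New task arrives: freeze every existing projection, then
        # append a freshly initialized trainable one.
        self.trainable = [False] * len(self.trainable)
        W = [[random.gauss(0.0, 0.02) for _ in range(self.dim)]
             for _ in range(self.dim)]
        self.projections.append(W)
        self.trainable.append(True)

    def forward(self, feat):
        # Aggregate all task projections of the frozen-encoder feature.
        # A plain sum stands in for PROOF's learned fusion here.
        out = [0.0] * self.dim
        for W in self.projections:
            out = [o + p for o, p in zip(out, matvec(W, feat))]
        return out

# Usage: after two tasks, only the second projection remains trainable.
pool = ProjectionPool(dim=4)
pool.expand()  # task 1
pool.expand()  # task 2
print(pool.trainable)  # → [False, True]
```

Because old projections never receive gradient updates under this scheme, the mapping learned for earlier classes is preserved exactly, which is how forgetting is alleviated at the adaptation layer.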