DesCLIP: Robust Continual Adaptation via General Attribute Descriptions for Pretrained Vision-Language Models

📅 2025-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address catastrophic forgetting and the difficulty of jointly preserving general and specialized knowledge in vision-language models (VLMs) under continual learning, this paper proposes a General Attribute (GA)-based ternary association mechanism, replacing conventional binary visual-class matching with visual-GA-class alignment. Methodologically, the authors introduce a language assistant to generate candidate GA descriptions and design an anchor-based embedding filter to select highly relevant GAs. They further integrate cross-modal instance matching, progressive fine-tuning of the visual encoder, and collaborative alignment of text embeddings to jointly optimize multimodal representations. Evaluated on multiple continual learning benchmarks, the approach significantly outperforms existing VLM-based methods, achieving substantial gains in average accuracy while effectively mitigating forgetting and enhancing generalization stability.

📝 Abstract
Continual adaptation of vision-language models (VLMs) focuses on leveraging cross-modal pretrained knowledge to incrementally adapt for expanding downstream tasks and datasets, while tackling the challenge of knowledge forgetting. Existing research often focuses on connecting visual features with specific class text in downstream tasks, overlooking the latent relationships between general and specialized knowledge. Our findings reveal that forcing models to optimize inappropriate visual-text matches exacerbates forgetting of VLMs. To tackle this issue, we propose DesCLIP, which leverages general attribute (GA) descriptions to guide the understanding of specific class objects, enabling VLMs to establish robust *vision-GA-class* trilateral associations rather than relying solely on *vision-class* connections. Specifically, we introduce a language assistant to generate concrete GA description candidates via proper request prompts. Then, an anchor-based embedding filter is designed to obtain highly relevant GA description embeddings, which are leveraged as the paired text embeddings for visual-textual instance matching, thereby tuning the visual encoder. Correspondingly, the class text embeddings are gradually calibrated to align with these shared GA description embeddings. Extensive experiments demonstrate the advancements and efficacy of our proposed method, with comprehensive empirical evaluations highlighting its superior performance compared to existing pretrained and VLM-based continual learning methods.
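The anchor-based embedding filter described in the abstract can be pictured as a similarity ranking step: given a class-text anchor embedding, keep only the GA description embeddings closest to it. The sketch below is a minimal illustration under that reading, not the paper's implementation; all function and variable names (`filter_ga_embeddings`, `anchor`, `candidates`, `top_k`) are assumptions.

```python
import numpy as np

def filter_ga_embeddings(anchor, candidates, top_k=3):
    """Hypothetical anchor-based filter: keep the top_k GA description
    embeddings most cosine-similar to the class anchor embedding.

    anchor:     (d,) class-text anchor embedding
    candidates: (n, d) candidate GA description embeddings
    """
    a = anchor / np.linalg.norm(anchor)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    sims = c @ a                       # cosine similarity to the anchor
    idx = np.argsort(-sims)[:top_k]    # indices of the most relevant candidates
    return candidates[idx], sims[idx]

# Toy usage: 4 candidate embeddings in a 3-D space
anchor = np.array([1.0, 0.0, 0.0])
cands = np.array([[0.9, 0.1, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.8, 0.0, 0.2],
                  [-1.0, 0.0, 0.0]])
kept, scores = filter_ga_embeddings(anchor, cands, top_k=2)
```

In the paper's pipeline the retained embeddings would then serve as paired text targets for visual-textual instance matching; this sketch only covers the selection step.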
Problem

Research questions and friction points this paper is trying to address.

Continual Learning
Visual Language Models
Knowledge Retention
Innovation

Methods, ideas, or system contributions that make the work stand out.

DesCLIP
Continual Learning
Visual Language Models
Chiyuan He
Ph.D. student, University of Electronic Science and Technology of China
deep learning, continual learning, activity recognition, vision-language models
Zihuan Qiu
Ph.D. student, University of Electronic Science and Technology of China
continual learning, deep learning, computer vision
Fanman Meng
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Linfeng Xu
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Qingbo Wu
University of Electronic Science and Technology of China
video coding, image and video quality assessment
Hongliang Li
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China