Componential Prompt-Knowledge Alignment for Domain Incremental Learning

📅 2025-05-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
In domain-incremental learning (DIL), semantic misalignment among domain-specific prompt components induces knowledge conflicts and performance degradation. To address this, we propose a component-level prompt-knowledge alignment mechanism comprising two stages: initial structured prompt configuration and online dynamic alignment. Our method employs greedy-search-driven knowledge prompt mining, component-aware adaptive consistency constraints, and dynamic prompt evolution modeling to achieve intrinsic semantic alignment across domain prompts. Evaluated on multiple DIL benchmarks, our approach significantly outperforms existing prompt-based methods, effectively mitigating catastrophic forgetting while enhancing cross-domain generalization and knowledge reuse. Notably, it shifts prompt alignment from the holistic parameter level to an interpretable, semantically grounded component level, enabling fine-grained, principled alignment of prompt substructures across domains.

📝 Abstract
Domain Incremental Learning (DIL) aims to learn from non-stationary data streams across domains while retaining and utilizing past knowledge. Although prompt-based methods effectively store multi-domain knowledge in prompt parameters and obtain advanced performance through cross-domain prompt fusion, we reveal an intrinsic limitation: component-wise misalignment between domain-specific prompts leads to conflicting knowledge integration and degraded predictions. This arises from the random positioning of knowledge components within prompts, where irrelevant component fusion introduces interference. To address this, we propose Componential Prompt-Knowledge Alignment (KA-Prompt), a novel prompt-based DIL method that introduces component-aware prompt-knowledge alignment during training, significantly improving both the learning and inference capacity of the model. KA-Prompt operates in two phases: (1) Initial Componential Structure Configuring, where a set of old prompts containing knowledge relevant to the new domain is mined via greedy search, which is then exploited to initialize new prompts to achieve reusable knowledge transfer and establish intrinsic alignment between new and old prompts. (2) Online Alignment Preservation, which dynamically identifies the target old prompts and applies adaptive componential consistency constraints as new prompts evolve. Extensive experiments on DIL benchmarks demonstrate the effectiveness of our KA-Prompt. Our source code is available at https://github.com/zhoujiahuan1991/ICML2025-KA-Prompt
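The greedy mining step in phase (1) can be pictured as a coverage problem: repeatedly pick the old prompt whose components best improve the match to the new domain's features. The sketch below is a hypothetical illustration of that idea (the function name, shapes, and cosine-similarity coverage objective are our assumptions, not the paper's exact algorithm):

```python
import numpy as np

def greedy_prompt_mining(old_prompts, new_feats, k):
    """Toy greedy search over old prompts for a new domain.

    old_prompts: (P, C, D) pool of P old prompts, each with C components of dim D
    new_feats:   (N, D)    feature vectors sampled from the new domain
    Returns indices of k old prompts chosen by greedy coverage gain.
    """
    f = new_feats / np.linalg.norm(new_feats, axis=1, keepdims=True)
    # sims[p, n] = best cosine match between any component of prompt p and feature n
    sims = []
    for p in old_prompts:
        p = p / np.linalg.norm(p, axis=1, keepdims=True)
        sims.append((p @ f.T).max(axis=0))
    sims = np.stack(sims)                           # (P, N)
    chosen = []
    coverage = np.full(f.shape[0], -np.inf)         # best match so far per feature
    for _ in range(min(k, len(old_prompts))):
        # marginal gain of adding each prompt to the current selection
        gains = np.maximum(sims, coverage).mean(axis=1)
        if chosen:
            gains[chosen] = -np.inf                 # never re-pick a prompt
        best = int(np.argmax(gains))
        chosen.append(best)
        coverage = np.maximum(coverage, sims[best])
    return chosen
```

The mined prompts would then seed the new domain's prompt, giving the new and old components a shared layout from the start.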
Problem

Research questions and friction points this paper is trying to address.

Aligns domain-specific prompts to prevent conflicting knowledge integration
Addresses random positioning of knowledge components in prompts
Improves learning and inference in domain incremental learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Component-aware prompt-knowledge alignment for DIL
Initial componential structure configuring via greedy search
Online alignment preservation with adaptive consistency constraints
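To make the "adaptive componential consistency constraint" concrete, here is a minimal sketch of one plausible form: each new prompt component is pulled toward its nearest old component, weighted by how similar the pair already is. The function name, matching rule, and weighting are our assumptions for illustration, not the paper's exact loss:

```python
import numpy as np

def componential_consistency_loss(new_prompt, old_prompt):
    """Component-wise consistency between a new prompt and a mined old prompt.

    new_prompt, old_prompt: (C, D) arrays of C prompt components of dim D.
    Each new component is matched to its most similar old component; the
    squared distance to that match is weighted by their cosine similarity,
    so well-aligned components are constrained more strongly.
    """
    n = new_prompt / np.linalg.norm(new_prompt, axis=1, keepdims=True)
    o = old_prompt / np.linalg.norm(old_prompt, axis=1, keepdims=True)
    sim = n @ o.T                          # (C, C) component similarity
    match = sim.argmax(axis=1)             # nearest old component per new one
    w = np.clip(sim[np.arange(len(n)), match], 0.0, None)  # adaptive weights
    diff = new_prompt - old_prompt[match]
    return float((w * (diff ** 2).mean(axis=1)).sum() / len(n))
```

Because the matching is recomputed from the current component similarities, the constraint adapts as the new prompt evolves during training, which is the role phase (2) plays in the method.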