AI Summary
Unsupervised continuous anomaly detection (UCAD) faces the challenges of feature redundancy and catastrophic forgetting in multi-task representation learning. Method: We propose a key-prompt-driven cross-modal knowledge distillation mechanism and a structure-aware refined contrastive learning framework. Unlike supervised approaches that rely on prior labels, our method constructs a multimodal task representation memory bank by integrating BERT, ViT, Grounding DINO, and SAM; employs prompt-guided complementary cross-modal interaction; and introduces graph-structured fine-grained contrastive constraints to enable stable incremental representation updates. Contribution/Results: On the MVTec AD and VisA benchmarks, our approach achieves a mean detection accuracy of 0.921 with significantly lower forgetting rates than state-of-the-art methods, making it the first work to realize structure-aware, low-forgetting, and highly discriminative joint optimization of multi-task representations under unsupervised continual learning.
Abstract
Unsupervised Continuous Anomaly Detection (UCAD) faces significant challenges in multi-task representation learning, with existing methods suffering from incomplete representations and catastrophic forgetting. Unlike supervised models, unsupervised scenarios lack prior information, making it difficult to distinguish redundant from complementary multimodal features. To address this, we propose the Multimodal Task Representation Memory Bank (MTRMB) method, built on two key technical innovations: (1) a Key-Prompt-Multimodal Knowledge (KPMK) mechanism that uses concise key prompts to guide cross-modal feature interaction between BERT and ViT; and (2) Refined Structure-based Contrastive Learning (RSCL), which leverages Grounding DINO and SAM to generate precise segmentation masks, pulling features of the same structural region closer while pushing features of different structural regions apart. Experiments on the MVTec AD and VisA datasets demonstrate MTRMB's superiority: it achieves an average detection accuracy of 0.921 while maintaining the lowest forgetting rate, significantly outperforming state-of-the-art methods. We plan to open-source our code on GitHub.
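The structure-based contrastive objective described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the supervised-contrastive-style loss form, and the use of integer region IDs (e.g. derived from SAM segmentation masks) are all assumptions made for clarity.

```python
# Hypothetical sketch of a structure-aware contrastive loss: patch features
# from the same segmentation region are pulled together, features from
# different regions are pushed apart. Names and loss form are illustrative
# assumptions, not the paper's actual implementation.
import numpy as np

def structure_contrastive_loss(features, region_ids, temperature=0.1):
    """Supervised-contrastive-style loss over patch features.

    features:   (N, D) array of patch embeddings
    region_ids: (N,) integer label of the structural region each patch
                belongs to (e.g. taken from a segmentation mask)
    """
    # L2-normalize so dot products are cosine similarities
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature            # (N, N) similarity logits
    n = len(f)
    self_mask = np.eye(n, dtype=bool)
    # positives: pairs from the same structural region, excluding self-pairs
    pos = (region_ids[:, None] == region_ids[None, :]) & ~self_mask

    loss, count = 0.0, 0
    for i in range(n):
        if not pos[i].any():
            continue
        # log-softmax of row i over all other patches (self excluded)
        denom = np.log(np.exp(sim[i][~self_mask[i]]).sum())
        log_prob = sim[i] - denom
        # average negative log-likelihood of the positive pairs
        loss += -log_prob[pos[i]].mean()
        count += 1
    return loss / max(count, 1)
```

With this formulation, the loss is small when embeddings cluster by structural region and grows when same-region patches are scattered, which is the behavior the RSCL component relies on.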