Multimodal Task Representation Memory Bank vs. Catastrophic Forgetting in Anomaly Detection

📅 2025-02-10
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Unsupervised continuous anomaly detection (UCAD) faces challenges of feature redundancy and catastrophic forgetting in multi-task representation learning. Method: We propose a key-prompt-driven cross-modal knowledge distillation mechanism and a structure-aware refined contrastive learning framework. Unlike supervised approaches relying on prior labels, our method constructs a multimodal task representation memory bank by integrating BERT, ViT, Grounding DINO, and SAM; employs prompt-guided complementary cross-modal interaction; and introduces graph-structured fine-grained contrastive constraints to enable stable incremental representation updates. Contribution/Results: On MVTec AD and VisA benchmarks, our approach achieves a mean detection accuracy of 0.921 and significantly lower forgetting rates than state-of-the-art methods, marking the first work to realize structure-aware, low-forgetting, and highly discriminative joint optimization of multi-task representations under unsupervised continual learning.

๐Ÿ“ Abstract
Unsupervised Continuous Anomaly Detection (UCAD) faces significant challenges in multi-task representation learning, with existing methods suffering from incomplete representation and catastrophic forgetting. Unlike supervised models, unsupervised scenarios lack prior information, making it difficult to effectively distinguish redundant and complementary multimodal features. To address this, we propose the Multimodal Task Representation Memory Bank (MTRMB) method, built on two key technical innovations: (1) a Key-Prompt-Multimodal Knowledge (KPMK) mechanism that uses concise key prompts to guide cross-modal feature interaction between BERT and ViT; and (2) Refined Structure-based Contrastive Learning (RSCL), which leverages Grounding DINO and SAM to generate precise segmentation masks, pulling features of the same structural region closer while pushing features of different structural regions apart. Experiments on the MVTec AD and VisA datasets demonstrate MTRMB's superiority, achieving an average detection accuracy of 0.921 at the lowest forgetting rate, significantly outperforming state-of-the-art methods. We plan to open-source the code on GitHub.
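The paper does not detail how the memory bank routes a test image to a task, so the following is only a minimal sketch of one plausible mechanism consistent with the abstract: each learned task stores a key vector, and at inference the task whose key is most similar to the query feature is selected, so no task label is required. The function name and the cosine-similarity choice are assumptions, not the authors' implementation.

```python
import numpy as np

def select_task(query_feat, task_keys):
    """Hypothetical sketch: a task-representation memory bank stores one
    key vector per learned task; at test time we pick the task whose key
    is most cosine-similar to the query image's feature."""
    q = query_feat / np.linalg.norm(query_feat)                      # normalize query
    keys = task_keys / np.linalg.norm(task_keys, axis=1, keepdims=True)  # normalize keys
    return int(np.argmax(keys @ q))                                  # best-matching task index
```

Once a task is selected, only that task's stored representations would be consulted for anomaly scoring, which is what keeps earlier tasks from being overwritten.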
Problem

Research questions and friction points this paper is trying to address.

Addresses catastrophic forgetting in anomaly detection
Enhances multimodal feature representation learning
Improves unsupervised continuous anomaly detection accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Task Representation Memory Bank
Key-Prompt-Multimodal Knowledge mechanism
Refined Structure-based Contrastive Learning
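The RSCL idea above (pull features of the same structural region together, push different regions apart, using segmentation masks from Grounding DINO and SAM) can be sketched as an InfoNCE-style loss over patch embeddings. This is an illustrative sketch, not the paper's loss: the function name, the temperature value, and the numpy formulation are assumptions; region IDs stand in for the mask labels the segmentation models would produce.

```python
import numpy as np

def structure_contrastive_loss(features, region_ids, temperature=0.1):
    """Hypothetical sketch of structure-based contrastive learning:
    patch embeddings sharing a segmentation-mask region are treated as
    positives, all other patches as negatives (InfoNCE-style)."""
    # features: (N, D) patch embeddings; region_ids: (N,) mask labels.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature                    # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)                 # exclude self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    same = region_ids[:, None] == region_ids[None, :]
    np.fill_diagonal(same, False)                  # a patch is not its own positive
    pos_counts = same.sum(axis=1)
    valid = pos_counts > 0                         # anchors with at least one positive
    contrib = np.where(same, log_prob, 0.0)        # keep only positive-pair terms
    loss = -contrib[valid].sum(axis=1) / pos_counts[valid]
    return loss.mean()
```

With well-separated regions the loss is near zero; mislabeling patches across regions drives it up, which is the training signal that sharpens region-level discrimination.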
You Zhou
Jiangshan Zhao
Deyu Zeng (Shenzhen University)
Zuo Zuo (Xi'an Jiaotong University)
Weixiang Liu
Zongze Wu