🤖 AI Summary
Medical imaging distributions evolve over time, causing significant performance degradation in visual anomaly detection (VAD) models, a challenge not previously addressed within the continual learning (CL) paradigm. This work bridges VAD and CL for healthcare applications by proposing PatchCoreCL, a continual anomaly detection framework built upon PatchCore. It introduces an incremental update mechanism that combines feature replay with adaptive threshold adjustment to preserve prior knowledge while detecting emerging anomalies. Evaluated on the BMAD dataset at both the image and pixel level, PatchCoreCL achieves a forgetting rate below 1% while attaining performance comparable to task-specific models. This study establishes the first scalable, low-forgetting CL paradigm for robust VAD under dynamic medical data distributions.
📝 Abstract
Visual Anomaly Detection (VAD) seeks to identify abnormal images and precisely localize the corresponding anomalous regions, relying solely on normal data during training. This approach has proven essential in domains such as manufacturing and, more recently, in the medical field, where accurate and explainable detection is critical. Despite its importance, the impact of evolving input data distributions over time has received limited attention, even though such changes can significantly degrade model performance. In particular, given the dynamic nature of medical imaging data, Continual Learning (CL) provides a natural and effective framework for incrementally adapting models while preserving previously acquired knowledge. This study explores for the first time the application of VAD models in a CL scenario for the medical field. In this work, we utilize a CL version of the well-established PatchCore model, called PatchCoreCL, and evaluate its performance on BMAD, a real-world medical imaging dataset with both image-level and pixel-level annotations. Our results demonstrate that PatchCoreCL is an effective solution, achieving performance comparable to task-specific models with a forgetting value of less than 1%, highlighting the feasibility and potential of CL for adaptive VAD in medical imaging.
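The abstract does not spell out the incremental update mechanism in code. Below is a minimal, hypothetical NumPy sketch of what a PatchCore-style continual memory bank with feature replay, greedy coreset subsampling, and adaptive thresholding might look like; the class and method names (`ContinualMemoryBank`, `update`, `score`) are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

class ContinualMemoryBank:
    """Hypothetical sketch of a PatchCore-style continual memory bank.

    Old patch features are retained alongside new ones (feature replay),
    and the merged bank is re-subsampled with greedy k-center coreset
    selection so its size stays bounded across tasks. The decision
    threshold is re-fit on held-out normal data after every update
    (adaptive threshold adjustment).
    """

    def __init__(self, max_size=1000, quantile=0.99):
        self.max_size = max_size      # coreset budget (bounds memory growth)
        self.quantile = quantile      # quantile of normal scores used as threshold
        self.bank = np.empty((0, 0))  # (n_patches, feat_dim) stored features
        self.threshold = None

    def _coreset(self, feats):
        # Greedy k-center selection: repeatedly keep the point farthest
        # from the currently selected set, starting from an arbitrary point.
        selected = [0]
        dists = np.linalg.norm(feats - feats[0], axis=1)
        while len(selected) < min(self.max_size, len(feats)):
            idx = int(np.argmax(dists))
            selected.append(idx)
            dists = np.minimum(dists, np.linalg.norm(feats - feats[idx], axis=1))
        return feats[selected]

    def update(self, new_feats, normal_val_feats):
        # Feature replay: merge the old bank with the new task's patch features,
        # then compress the union back down to the coreset budget.
        merged = new_feats if self.bank.size == 0 else np.vstack([self.bank, new_feats])
        self.bank = self._coreset(merged)
        # Adaptive threshold: quantile of anomaly scores on normal validation patches.
        self.threshold = np.quantile(self.score(normal_val_feats), self.quantile)

    def score(self, feats):
        # Anomaly score: distance to the nearest stored normal patch feature.
        d = np.linalg.norm(feats[:, None, :] - self.bank[None, :, :], axis=2)
        return d.min(axis=1)
```

In this sketch, replay happens in feature space rather than by storing raw images, which keeps per-task memory small; the quantile-based threshold is one plausible way to realize the adaptive adjustment the abstract mentions.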