Fence off Anomaly Interference: Cross-Domain Distillation for Fully Unsupervised Anomaly Detection

📅 2025-08-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
In fully unsupervised anomaly detection (FUAD), conventional knowledge distillation suffers from implicit anomalies in the training data, which introduce abnormal bias into the distilled representations. To address this, the authors propose a cross-domain reverse distillation framework. The method partitions the training set into multiple domains with lower anomaly ratios and trains a domain-specific student for each. A Cross-Domain Knowledge Aggregation mechanism then uses pseudo-normal features from these domain-specific students to guide a global student toward robust, generalizable normal representations, effectively blocking the transfer of anomalous knowledge. The key innovation lies in integrating the reverse distillation paradigm with a domain-decoupled design, enabling reliable normal-pattern modeling without any labels or anomaly priors. Experiments on noisy versions of the MVTec AD and VisA benchmarks demonstrate significant improvements over the baseline, validating both effectiveness and robustness against unlabeled anomalies.

📝 Abstract
Fully Unsupervised Anomaly Detection (FUAD) is a practical extension of Unsupervised Anomaly Detection (UAD) that aims to detect anomalies without any labels, even when the training set may contain anomalous samples. To achieve FUAD, we pioneer the introduction of the Knowledge Distillation (KD) paradigm, based on the teacher-student framework, into the FUAD setting. However, because anomalies are present in the training data, traditional KD methods risk letting the student learn the teacher's representation of anomalies, resulting in poor anomaly detection performance. To address this issue, we propose a novel Cross-Domain Distillation (CDD) framework based on the widely studied reverse distillation (RD) paradigm. Specifically, we design a Domain-Specific Training strategy, which divides the training set into multiple domains with lower anomaly ratios and trains a domain-specific student for each. Cross-Domain Knowledge Aggregation is then performed, in which pseudo-normal features generated by the domain-specific students collaboratively guide a global student to learn generalized normal representations across all samples. Experimental results on noisy versions of the MVTec AD and VisA datasets demonstrate that our method achieves significant performance improvements over the baseline, validating its effectiveness under the FUAD setting.
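The aggregation step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the array shapes, the mean-based aggregation of domain-student features into pseudo-normal targets, and the cosine-distance distillation loss (a common choice in reverse distillation) are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, D = 3, 8, 16  # domains, samples in a batch, feature dimension

# Stand-ins for features the K domain-specific students produce for the
# same batch of N samples (in the paper these come from trained students).
domain_feats = rng.normal(size=(K, N, D))

# Cross-Domain Knowledge Aggregation (assumed here to be a simple mean):
# combine domain-specific features into pseudo-normal target features.
pseudo_normal = domain_feats.mean(axis=0)  # shape (N, D)

# Stand-in for the global student's features on the same batch.
student_feats = rng.normal(size=(N, D))

def cosine_distill_loss(s, t):
    """1 - cosine similarity, averaged over the batch; a loss commonly
    used in reverse-distillation frameworks to align student features
    with target features."""
    s_n = s / np.linalg.norm(s, axis=-1, keepdims=True)
    t_n = t / np.linalg.norm(t, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(s_n * t_n, axis=-1)))

loss = cosine_distill_loss(student_feats, pseudo_normal)
print(loss)  # a scalar in [0, 2]; lower means better alignment
```

In training, this loss would be minimized with respect to the global student's parameters, so the global student learns only the aggregated (pseudo-normal) behavior rather than any single domain student's anomalous bias.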
Problem

Research questions and friction points this paper is trying to address.

Detecting anomalies without any labels in training data
Preventing student model from learning anomalous representations
Improving cross-domain knowledge aggregation for generalized normal features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-Domain Distillation framework
Domain-Specific Training strategy
Cross-Domain Knowledge Aggregation mechanism