Mitigating Long-Tailed Anomaly Score Distributions with Importance-Weighted Loss

📅 2025-06-30
🏛️ IEEE International Joint Conference on Neural Networks (IJCNN)
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the long-tailed distribution (LTD) of anomaly scores in industrial anomaly detection, which arises from the diversity of normal samples and leads to biased model training and degraded detection performance for underrepresented normal patterns. To mitigate this issue, the authors propose a novel importance-weighted loss function that requires no prior knowledge of normal sample categories. For the first time, they align the anomaly score distribution to a target Gaussian distribution via importance sampling under a fully unsupervised setting. This approach effectively alleviates the LTD problem and enhances the model's ability to equitably represent diverse normal modes. Extensive experiments on three image benchmarks and three real-world hyperspectral datasets demonstrate an average improvement of 0.043 in detection performance, confirming the method's effectiveness and generalizability.

πŸ“ Abstract
Anomaly detection is crucial in industrial applications for identifying rare and unseen patterns to ensure system reliability. Traditional models, trained on a single class of normal data, struggle with real-world distributions where normal data exhibit diverse patterns, leading to class imbalance and long-tailed anomaly score distributions (LTD). This imbalance skews model training and degrades detection performance, especially for minority instances. To address this issue, we propose a novel importance-weighted loss designed specifically for anomaly detection. Compared to the previous method for LTD in classification, our method does not require prior knowledge of normal data classes. Instead, we introduce a weighted loss function that incorporates importance sampling to align the distribution of anomaly scores with a target Gaussian, ensuring a balanced representation of normal data. Extensive experiments on three benchmark image datasets and three real-world hyperspectral imaging datasets demonstrate the robustness of our approach in mitigating LTD-induced bias. Our method improves anomaly detection performance by 0.043, highlighting its effectiveness in real-world applications.
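The core idea described in the abstract — reweighting each sample's loss so the long-tailed anomaly-score distribution is pulled toward a target Gaussian via importance sampling — can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the histogram density estimate, the batch-statistics target, and all function names are hypothetical.

```python
import numpy as np

def importance_weights(scores, target_mean=None, target_std=None,
                       bins=20, eps=1e-8):
    """Importance weights that rebalance a long-tailed anomaly-score
    distribution toward a target Gaussian (illustrative sketch only).

    If no target is given, the batch mean/std are used as an assumption.
    """
    scores = np.asarray(scores, dtype=float)
    mu = scores.mean() if target_mean is None else target_mean
    sigma = scores.std() if target_std is None else target_std

    # Empirical density of the observed (long-tailed) scores via histogram.
    hist, edges = np.histogram(scores, bins=bins, density=True)
    idx = np.clip(np.digitize(scores, edges[1:-1]), 0, bins - 1)
    p_emp = hist[idx] + eps  # each sample's bin is nonempty, so hist[idx] > 0

    # Target Gaussian density evaluated at each score.
    z = (scores - mu) / sigma
    p_tgt = np.exp(-0.5 * z**2) / (sigma * np.sqrt(2.0 * np.pi))

    # Importance ratio: up-weight scores the empirical density
    # over-represents less than the target would.
    w = p_tgt / p_emp
    return w / w.mean()  # normalize so the overall loss scale is preserved

def weighted_loss(per_sample_losses, scores):
    """Importance-weighted mean loss over a batch."""
    w = importance_weights(scores)
    return float(np.mean(w * np.asarray(per_sample_losses, dtype=float)))

# Example: a long-tailed score distribution (heavy right tail).
rng = np.random.default_rng(0)
scores = rng.exponential(scale=1.0, size=1000)
loss = weighted_loss(scores, scores)  # scores stand in for per-sample losses
```

Normalizing the weights to unit mean keeps the effective learning rate comparable to the unweighted loss, which is one common design choice when importance sampling is folded into a training objective.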
Problem

Research questions and friction points this paper is trying to address.

anomaly detection
long-tailed distribution
class imbalance
anomaly score distribution
industrial applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

importance-weighted loss
long-tailed anomaly score distribution
anomaly detection
importance sampling
class imbalance
Jungi Lee
Ph.D. Student at Seoul National University
Computer Architecture · Deep Learning · Performance Modeling
Jungkwon Kim
R&D Division, ELROILAB Inc., Seoul, Republic of Korea
Chi Zhang
R&D Division, ELROILAB Inc., Seoul, Republic of Korea
Sangmin Kim
R&D Division, ELROILAB Inc., Seoul, Republic of Korea
Kwangsun Yoo
R&D Division, ELROILAB Inc., Seoul, Republic of Korea
Seok-Joo Byun
R&D Division, ELROILAB Inc., Seoul, Republic of Korea