EIAD: Explainable Industrial Anomaly Detection Via Multi-Modal Large Language Models

📅 2025-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Industrial Anomaly Detection (IAD) faces two key challenges: (1) zero-shot defect segmentation lacks interpretability, and (2) multimodal large models suffer from overfitting when jointly performing question answering (QA) and pixel-level localization. To address these, we propose a decoupled multimodal defect localization framework featuring independently optimized dual pathways—dialogue and visual feature streams—augmented by mask-guided vision-language alignment training. We further introduce DDQA, the first real-world industrial defect detection QA dataset, encompassing diverse defect types and operational scenarios. Our method achieves high localization accuracy while enabling, for the first time, zero-shot generation of fine-grained defect descriptions. It significantly improves both QA accuracy and mask IoU, demonstrating superior performance on public benchmarks and factory-line deployment. Empirical results confirm its high accuracy, strong interpretability, and practical deployability.
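The mask IoU metric the summary reports improvements on is the standard intersection-over-union between a predicted defect mask and the ground-truth mask. A minimal sketch follows; the function name and the toy masks are illustrative, not taken from the paper:

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-Union between two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Convention: two empty masks count as a perfect match.
    return float(inter) / float(union) if union else 1.0

# Toy 4x4 example: a predicted defect region vs. ground truth
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:3] = True  # 4 predicted defect pixels
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:4] = True    # 6 ground-truth defect pixels
print(mask_iou(pred, gt))  # 4 / 6 ≈ 0.667
```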

📝 Abstract
Industrial Anomaly Detection (IAD) is critical to ensure product quality during manufacturing. Although existing zero-shot defect segmentation and detection methods have shown effectiveness, they cannot provide detailed descriptions of the defects. Furthermore, the application of large multi-modal models in IAD remains in its infancy, facing challenges in balancing question-answering (QA) performance and mask-based grounding capabilities, often owing to overfitting during the fine-tuning process. To address these challenges, we propose a novel approach that introduces a dedicated multi-modal defect localization module to decouple the dialog functionality from the core feature extraction. This decoupling is achieved through independent optimization objectives and tailored learning strategies. Additionally, we contribute the first multi-modal industrial anomaly detection training dataset, named Defect Detection Question Answering (DDQA), encompassing a wide range of defect types and industrial scenarios. Unlike conventional datasets that rely on GPT-generated data, DDQA ensures authenticity and reliability and offers a robust foundation for model training. Experimental results demonstrate that our proposed method, Explainable Industrial Anomaly Detection Assistant (EIAD), achieves outstanding performance in defect detection and localization tasks. It not only significantly enhances accuracy but also improves interpretability. These advancements highlight the potential of EIAD for practical applications in industrial settings.
Problem

Research questions and friction points this paper is trying to address.

Zero-shot defect detection methods localize defects but cannot explain them, limiting interpretability in industrial settings.
Multi-modal large models overfit when fine-tuned to perform QA and pixel-level localization jointly.
No authentic (non-GPT-generated) multi-modal QA dataset exists for training industrial anomaly detection models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dedicated multi-modal defect localization module that decouples dialog from core feature extraction
Independent optimization objectives and tailored learning strategies for the decoupled pathways
DDQA, the first real-world multi-modal industrial anomaly detection QA dataset