🤖 AI Summary
Hyperspectral anomaly detection faces persistent challenges including high computational complexity, noise sensitivity, and poor cross-dataset generalizability. To address these, this work systematically surveys and empirically evaluates four major methodological paradigms—statistical models, sparse representation, conventional machine learning, and deep learning—within a unified benchmarking framework across 17 standard hyperspectral datasets. We propose a multidimensional evaluation protocol jointly assessing detection accuracy (AUC/ROC), computational efficiency, and generalization capability, and use separability maps to quantify method robustness. Experimental results reveal that deep learning achieves the highest detection accuracy, while statistical methods excel in inference speed; all paradigms exhibit fundamental trade-offs among accuracy, efficiency, and generalizability. This study establishes a comprehensive, metric-rich, and reproducible benchmark for hyperspectral anomaly detection, providing empirical guidance for algorithm selection and principled design of next-generation methods.
📝 Abstract
Hyperspectral images are high-dimensional datasets consisting of hundreds of contiguous spectral bands, enabling detailed material and surface analysis. Hyperspectral anomaly detection (HAD) refers to the technique of identifying and locating anomalous targets in such data without prior information about the hyperspectral scene or target spectrum. This technology has seen rapid advancements in recent years, with applications in agriculture, defence, military surveillance, and environmental monitoring. Despite this significant progress, existing HAD methods continue to face challenges such as high computational complexity, sensitivity to noise, and limited generalisation across diverse datasets. This study presents a comprehensive comparison of HAD techniques, categorising them into statistical models, representation-based methods, classical machine learning approaches, and deep learning models. We evaluated these methods across 17 benchmark datasets using performance metrics such as ROC curves, AUC scores, and separability maps, analysing detection accuracy and computational efficiency as well as each category's strengths, limitations, and directions for future research. The results show that deep learning models achieved the highest detection accuracy, while statistical models demonstrated exceptional speed across all datasets. This study aims to provide valuable insights for researchers and practitioners working to advance the field of hyperspectral anomaly detection.
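The AUC metric used throughout this comparison can be understood as the probability that a randomly chosen anomaly pixel receives a higher detector score than a randomly chosen background pixel. A minimal, dependency-free sketch of that computation is shown below; the toy scores and labels are hypothetical stand-ins for the score map any real HAD method would produce.

```python
def auc_score(scores, labels):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (anomaly, background) pixel pairs where the anomaly pixel
    scores higher (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]  # anomaly pixels
    neg = [s for s, y in zip(scores, labels) if y == 0]  # background pixels
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical detector scores for six pixels; 1 = anomaly, 0 = background.
scores = [0.10, 0.40, 0.35, 0.80, 0.90, 0.20]
labels = [0,    0,    1,    1,    1,    0]
print(auc_score(scores, labels))  # → 0.888... (8 of 9 pairs ranked correctly)
```

A perfect detector yields an AUC of 1.0, while a detector whose scores are uninformative hovers around 0.5; the separability maps mentioned above complement this by visualising how well anomaly and background score distributions are separated.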