📝 Abstract
Time series anomaly detection (TSAD) focuses on identifying whether observations in streaming data deviate significantly from normal patterns. With the prevalence of connected devices, anomaly detection on time series has become paramount, as it enables real-time monitoring and early detection of irregular behaviors across various application domains. In this work, we introduce PatchTrAD, a Patch-based Transformer model for time series anomaly detection. Our approach combines a Transformer encoder with input patching under a reconstruction-based framework for anomaly detection. Empirical evaluations on multiple benchmark datasets show that PatchTrAD is on par, in terms of detection performance, with state-of-the-art deep learning models for anomaly detection while being time efficient during inference.
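The patch-based reconstruction idea in the abstract can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's implementation: the series is split into overlapping patches, a stand-in reconstruction model (here, a trivial per-patch mean; in PatchTrAD this would be the Transformer) rebuilds each patch, and per-patch errors are aggregated back into per-timestep anomaly scores. All function and parameter names (`make_patches`, `patch_anomaly_scores`, `patch_len`, `stride`) are illustrative assumptions.

```python
import numpy as np

def make_patches(x, patch_len, stride):
    # Partition a 1-D series into overlapping patches (hypothetical parameters).
    starts = list(range(0, len(x) - patch_len + 1, stride))
    return np.stack([x[s:s + patch_len] for s in starts]), starts

def patch_anomaly_scores(x, reconstruct, patch_len=8, stride=4):
    patches, starts = make_patches(x, patch_len, stride)
    recon = reconstruct(patches)                     # stand-in for the learned model
    errs = np.mean((patches - recon) ** 2, axis=1)   # per-patch reconstruction MSE
    # Aggregate patch-level errors into per-timestep scores by averaging overlaps.
    scores = np.zeros(len(x))
    counts = np.zeros(len(x))
    for e, s in zip(errs, starts):
        scores[s:s + patch_len] += e
        counts[s:s + patch_len] += 1
    return scores / np.maximum(counts, 1)

# Toy demo: a sine wave with one injected point anomaly. The "model" here
# simply reconstructs each patch as its own mean (a deliberately weak placeholder).
x = np.sin(np.linspace(0, 6 * np.pi, 200))
x[120] += 5.0  # injected anomaly
scores = patch_anomaly_scores(x, lambda p: p.mean(axis=1, keepdims=True))
print(int(np.argmax(scores)))  # index of the highest score, near the injected anomaly
```

Thresholding `scores` (e.g., at a quantile of scores seen on normal data) would then yield binary detections; the actual thresholding and aggregation strategy used by PatchTrAD is described in the paper itself.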