🤖 AI Summary
This paper addresses the challenges of evaluating robustness and ensuring reproducibility in real-time localization and tracking (RTLS) systems under radar spoofing attacks. To this end, we propose the first modular and reproducible benchmarking framework for this setting. Methodologically, we design a decoupled dual-stream architecture, comprising a clean detection stream and a spoofed detection stream, to model three canonical radar spoofing types: drift, ghost, and mirror attacks. The framework integrates Joint Probabilistic Data Association (JPDA) and Global Nearest Neighbor (GNN) trackers, introduces a drift-from-truth offset metric to quantify assignment errors, and enables attack interpretability via trajectory offset visualizations, clustering overlays, and spoofing injection timelines. Our contributions include an open-source benchmark framework, standardized evaluation protocols, and automated analysis tools, which together enhance transparency, comparability, and community verifiability in anti-spoofing tracking research.
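To make the three spoofing types concrete, the minimal sketch below shows how drift, ghost, and mirror injections could be applied to a frame of radar detections. This is not the framework's actual API; the function name `inject_spoof` and the default offsets, rates, and axes are hypothetical and chosen only for illustration.

```python
import numpy as np

def inject_spoof(detections, spoof_type, t,
                 drift_rate=0.5, mirror_axis=0, ghost_offset=(40.0, -25.0)):
    """Illustrative injection of one of three canonical radar spoofing types.

    detections : (N, 2) array of [x, y] radar returns at time step t.
    Returns an array of spoofed detections (possibly with extra rows).
    All parameter values are hypothetical, not taken from SpoofTrackBench.
    """
    spoofed = detections.astype(float)
    if spoof_type == "drift":
        # Drift attack: bias every return by an offset that grows with time,
        # slowly pulling the tracker away from the true trajectory.
        spoofed = spoofed + drift_rate * t
    elif spoof_type == "ghost":
        # Ghost attack: append phantom returns offset from the real targets,
        # creating false candidates for the data-association stage.
        ghosts = spoofed + np.asarray(ghost_offset)
        spoofed = np.vstack([spoofed, ghosts])
    elif spoof_type == "mirror":
        # Mirror attack: reflect returns across an axis, producing a
        # plausible but displaced copy of the true scene.
        mirrored = spoofed.copy()
        mirrored[:, mirror_axis] *= -1.0
        spoofed = np.vstack([spoofed, mirrored])
    else:
        raise ValueError(f"unknown spoof type: {spoof_type}")
    return spoofed

# Example usage with a made-up two-target frame:
frame = np.array([[120.0, 35.0], [80.0, -12.0]])
print(inject_spoof(frame, "ghost", t=10))
```

In a dual-stream setup, the clean stream would carry `frame` unchanged while the spoofed stream carries the output of an injection like this, so the two can be tracked and compared side by side.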
📝 Abstract
SpoofTrackBench is a reproducible, modular benchmark for evaluating adversarial robustness in real-time localization and tracking (RTLS) systems under radar spoofing. Leveraging the Hampton University Skyler Radar Sensor dataset, we simulate drift-, ghost-, and mirror-type spoofing attacks and evaluate tracker performance using both Joint Probabilistic Data Association (JPDA) and Global Nearest Neighbor (GNN) architectures. Our framework separates clean and spoofed detection streams, visualizes spoof-induced trajectory divergence, and quantifies assignment errors via direct drift-from-truth metrics. Clustering overlays, injection-aware timelines, and scenario-adaptive visualizations enable interpretability across spoof types and configurations. Evaluation figures and logs are auto-exported for reproducible comparison. SpoofTrackBench sets a new standard for open, ethical benchmarking of spoof-aware tracking pipelines, enabling rigorous cross-architecture analysis and community validation.
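A drift-from-truth assignment error can be read as the mean offset between estimated tracks and ground-truth targets after a one-to-one matching. The sketch below is an assumed formulation for illustration only (the function name `drift_from_truth` and the Hungarian matching step are our choices, not necessarily the paper's exact metric).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def drift_from_truth(track_positions, truth_positions):
    """Mean Euclidean offset between tracks and truth after optimal assignment.

    track_positions : (M, 2) estimated track positions at one time step.
    truth_positions : (K, 2) ground-truth target positions at the same step.
    This is an illustrative sketch, not SpoofTrackBench's exact metric.
    """
    # Pairwise distances between every track and every truth target.
    cost = np.linalg.norm(
        track_positions[:, None, :] - truth_positions[None, :, :], axis=-1
    )
    # One-to-one assignment minimizing total offset (Hungarian algorithm).
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

# Example: two tracks drifting slightly from two truth targets.
tracks = np.array([[121.0, 36.5], [78.0, -11.0]])
truth = np.array([[120.0, 35.0], [80.0, -12.0]])
print(drift_from_truth(tracks, truth))
```

Computed per time step on both the clean and spoofed streams, a metric of this form exposes how far each tracker is pulled from truth as an injection unfolds.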