🤖 AI Summary
To address the high computational cost of deep time-series classification models and the difficulty of achieving adversarial robustness and inference efficiency simultaneously, this paper proposes a unified elastic framework. The framework integrates adversarial-sample detection, attack-type identification, and dataset-similarity-based cross-domain model reuse, eliminating the need for redundant retraining. Its key innovation is coupling adversarial awareness with adaptive model-library selection: a lightweight detection module flags anomalous inputs, a dedicated classifier determines the attack type, and the best-matching pre-trained model is retrieved and deployed based on feature-space similarity. Experiments demonstrate that the method reduces average inference computational overhead by 77.68% while keeping classification accuracy within 2.02% of the oracle upper bound. It generalizes well across diverse datasets and holds practical value for real-world deployment.
📝 Abstract
Minimizing computational overhead in time-series classification, particularly with deep learning models, is challenging because of complex model architectures and the large volumes of sequential data that must be processed in real time. The challenge is compounded by adversarial attacks, which call for resilient methods that combine robust performance with efficient model selection. To address it, we propose ReLATE+, a comprehensive framework that detects and classifies adversarial attacks, adaptively selects deep learning models based on dataset-level similarity, and thereby substantially reduces retraining costs relative to conventional methods that do not leverage prior knowledge. ReLATE+ first checks whether incoming data is adversarial and, if so, classifies the attack type; it then uses this insight to identify a similar dataset in a repository and reuse the best-performing model associated with it. This approach maintains strong performance while reducing the need for retraining, and it generalizes well across domains with varying data distributions and feature spaces. Experiments show that ReLATE+ reduces computational overhead by an average of 77.68%, enhancing adversarial resilience and streamlining robust model selection while keeping accuracy within 2.02% of the Oracle.
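The detect → classify → retrieve workflow described above can be sketched as follows. This is a minimal illustration under strong assumptions, not the paper's actual method: the z-score detector, nearest-signature attack classifier, and Euclidean feature-space matching (and all function names such as `relate_plus_pipeline`) are stand-ins for whatever detectors, classifiers, and similarity measures ReLATE+ actually uses.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def detect_adversarial(x, clean_mean, clean_std, z_threshold=3.0):
    """Flag an input whose mean z-score deviates strongly from clean-data statistics."""
    z_scores = [abs((xi - m) / s) for xi, m, s in zip(x, clean_mean, clean_std)]
    return sum(z_scores) / len(z_scores) > z_threshold

def classify_attack(x, attack_signatures):
    """Assign the nearest known attack signature (e.g. per-attack feature centroids)."""
    return min(attack_signatures, key=lambda name: euclidean(x, attack_signatures[name]))

def select_model(dataset_features, model_library):
    """Reuse the pre-trained model whose source dataset is closest in feature space."""
    best = min(model_library,
               key=lambda name: euclidean(dataset_features, model_library[name]["features"]))
    return best, model_library[best]["model"]

def relate_plus_pipeline(x, clean_mean, clean_std, attack_signatures, model_library):
    """End-to-end sketch: detect -> classify attack -> similarity-based model reuse."""
    attack = None
    if detect_adversarial(x, clean_mean, clean_std):
        attack = classify_attack(x, attack_signatures)
    dataset_name, model = select_model(x, model_library)
    return attack, dataset_name, model
```

The design point the sketch tries to capture is that the expensive step (training a robust model) is replaced by a cheap lookup: detection and attack typing are lightweight statistics, and model "selection" is a nearest-neighbor query over a repository of already-trained models, which is where the reported reduction in retraining overhead would come from.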