AI Summary
This work addresses the vulnerability of k-means-based resource allocation systems in fog computing to causative evasion attacks, in which an adversary uses query-based reverse engineering to compromise model integrity and allocation stability. To counter this threat, the paper introduces adversarial training into fog computing resource scheduling for the first time, embedding an adversarial-example augmentation mechanism into the k-means clustering pipeline. This approach enables end-to-end defense against both exploratory and causative evasion attacks. Experimental results demonstrate that the proposed method significantly improves the robustness of the clustering model under adversarial conditions while preserving the stability of virtual machine resource allocation.
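The two-phase scheme and the adversarial-training defense described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the Gaussian workload data, and the perturbation scheme (pushing each request toward its second-nearest centroid) are all assumptions made for the example.

```python
import numpy as np

def kmeans_fit(X, k, iters=50, seed=0):
    """Offline phase: cluster historical workload requests (Lloyd's k-means
    with farthest-point initialisation to avoid degenerate starts)."""
    rng = np.random.default_rng(seed)
    centroids = [X[rng.integers(len(X))]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[np.argmax(d)])
    centroids = np.array(centroids)
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centroids, axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def classify(x, centroids):
    """Online phase: assign a new incoming request to the nearest cluster."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

def adversarial_augment(X, centroids, eps=0.1):
    """Perturb each request toward its second-nearest centroid, imitating
    a small evasion-style shift across the decision boundary."""
    dists = np.linalg.norm(X[:, None] - centroids, axis=2)
    runner_up = np.argsort(dists, axis=1)[:, 1]
    direction = centroids[runner_up] - X
    direction /= np.linalg.norm(direction, axis=1, keepdims=True) + 1e-12
    return X + eps * direction

# Adversarial training: refit on clean plus perturbed requests so the
# centroids (and hence the VM allocation) stay stable under perturbations.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centroids = kmeans_fit(X, k=2)
X_adv = adversarial_augment(X, centroids)
robust_centroids = kmeans_fit(np.vstack([X, X_adv]), k=2)
```

Augmenting the training set with boundary-crossing variants is one common way to realise adversarial training for clustering; the paper's actual augmentation mechanism may differ.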
Abstract
This paper investigates the susceptibility of k-means-based resource provisioning in fog networks to model-integrity attacks that overload the assigned virtual machines. The considered k-means algorithm iterates over two phases: offline clustering, which forms clusters of requested workload, and online classification, which assigns new incoming requests to the offline-created clusters. We first consider an evasion attack against the classifier in the online phase: a threat actor launches an exploratory attack, using query-based reverse engineering to discover the Machine Learning (ML) model (the clustering scheme), and then triggers a passive causative (evasion) attack in the offline phase. To defend the model, we propose a proactive method that uses adversarial training to build attack robustness into the classifier. Our results show that this mitigation technique effectively maintains the stability of the resource provisioning system under attack.
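The exploratory, query-based reverse-engineering step can be sketched as follows. Everything here is a hypothetical stand-in for the attack the abstract describes: the victim's centroids, the query range, and the estimation method (averaging the queries that receive each label, which approximates the Voronoi cells of the hidden clustering) are assumptions for illustration only.

```python
import numpy as np

# Victim's hidden model: two cluster centroids (illustrative values only).
true_centroids = np.array([[0.0, 0.0], [3.0, 3.0]])

def classify(x):
    """Query interface exposed by the online phase: nearest-centroid label."""
    return int(np.argmin(np.linalg.norm(true_centroids - x, axis=1)))

def reverse_engineer(n_queries=4000, low=-1.0, high=4.0, seed=0):
    """Exploratory attack: send random workload requests, record the returned
    labels, and average the queries falling in each label's region. The
    per-label means approximate the hidden clustering's decision regions,
    which is enough to locate the boundary a later evasion attack targets."""
    rng = np.random.default_rng(seed)
    queries = rng.uniform(low, high, size=(n_queries, 2))
    labels = np.array([classify(q) for q in queries])
    return {lab: queries[labels == lab].mean(axis=0) for lab in np.unique(labels)}

surrogate = reverse_engineer()  # attacker's substitute for the hidden model
```

With only black-box label access, the attacker recovers one representative point per cluster region; crafting requests near the inferred boundary then yields the evasion examples that the adversarially trained classifier is meant to withstand.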