🤖 AI Summary
To address the high communication overhead and the privacy-efficiency trade-off in cloud-edge collaborative intrusion detection, caused by frequent model uploads in conventional federated learning, this paper introduces a knowledge-distillation-enhanced federated learning framework. It proposes a privacy-preserving, lightweight edge-model distillation scheme coupled with a secure aggregation mechanism, enabling model compression without exposing raw data while preserving detection accuracy. Experiments on multiple IoT intrusion detection datasets show that the approach reduces communication volume by 62%, cuts inference latency by 47%, and incurs less than 0.8% accuracy degradation relative to state-of-the-art methods. The core contribution is a resource-aware, knowledge-distillation-augmented federated learning paradigm tailored to constrained edge environments, jointly optimizing communication efficiency, model accuracy, and data privacy.
📝 Abstract
The growth of the Internet of Things has amplified the need for secure data interactions in cloud-edge ecosystems, where sensitive information is constantly processed across multiple system layers. Intrusion detection systems are commonly deployed to protect such environments from malicious attacks. Recently, Federated Learning has emerged as an effective approach to implementing intrusion detection, owing to its decentralised architecture that avoids sharing raw data with a central server and thereby enhances data privacy. Despite these benefits, Federated Learning is criticised for the high communication overhead of frequent model updates, especially in large-scale cloud-edge infrastructures. This paper explores Knowledge Distillation as a means of reducing communication overhead in cloud-edge intrusion detection while preserving accuracy and data privacy. Experiments show significant improvements over state-of-the-art methods.
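The paper's full training objective and hyperparameters are not given in the abstract, but the knowledge-distillation idea it relies on, training a compact edge (student) model to mimic a larger (teacher) model's softened predictions alongside the ground-truth intrusion labels, can be sketched with the standard temperature-scaled distillation loss. The temperature `T` and mixing weight `alpha` below are illustrative placeholders, not values from the paper.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T gives a softer distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      temperature=2.0, alpha=0.5):
    """Weighted sum of hard-label cross-entropy and a soft-target KL term.

    alpha balances ground-truth supervision against mimicking the teacher;
    the T^2 factor keeps soft-target gradients comparable across temperatures.
    Illustrative only: the paper's actual loss formulation is not specified
    in the abstract.
    """
    # Hard loss: cross-entropy of the student against the one-hot label.
    p_student = softmax(student_logits)
    hard = -math.log(p_student[true_label] + 1e-12)

    # Soft loss: KL(teacher || student) at the raised temperature.
    p_teacher_t = softmax(teacher_logits, temperature)
    p_student_t = softmax(student_logits, temperature)
    soft = sum(t * math.log((t + 1e-12) / (s + 1e-12))
               for t, s in zip(p_teacher_t, p_student_t))

    return alpha * hard + (1 - alpha) * (temperature ** 2) * soft
```

In a federated setting, each edge node would minimise a loss of this shape locally and upload only the compact student's parameters, which is one plausible mechanism behind the reported reduction in communication volume.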