AI Summary
In IoT environments, severe resource contention arises between deep learning (DL) training and software-defined networking (SDN) operations under stringent edge-resource constraints.
Method: This paper proposes an adaptive resampling method tailored for federated learning (FL). Inspired by AdaBoost, it dynamically identifies misclassified samples, prioritizes their retention, and prunes redundant data, thereby significantly reducing the per-round training sample size while preserving the model's discriminative capability.
Contribution/Results: Evaluated on the CICIoT2023 dataset, the method achieves up to a 72.6% reduction in training time with only a 1.62% drop in accuracy, alongside substantial decreases in energy consumption and edge-device computational overhead. To our knowledge, this is the first work to embed a lightweight resampling mechanism into the FL framework to jointly optimize DL training efficiency and SDN real-time responsiveness, establishing a novel paradigm for efficient, low-latency coexistence in intelligent IoT systems.
Abstract
With the rise of Software-Defined Networking (SDN) for managing traffic and ensuring seamless operations across interconnected devices, challenges arise when SDN controllers share infrastructure with deep learning (DL) workloads. Resource contention between DL training and SDN operations, especially in latency-sensitive IoT environments, can degrade SDN's responsiveness and compromise network performance. Federated Learning (FL) helps address some of these concerns by decentralizing DL training to edge devices, thus reducing data transmission costs and enhancing privacy. Yet, the computational demands of DL training can still interfere with SDN's performance, especially under the continuous data streams characteristic of IoT systems. To mitigate this issue, we propose REDUS (Resampling for Efficient Data Utilization in Smart-Networks), a resampling technique that optimizes DL training by prioritizing misclassified samples and excluding redundant data, inspired by AdaBoost. REDUS reduces the number of training samples per epoch, thereby conserving computational resources, reducing energy consumption, and accelerating convergence without significantly impacting accuracy. Applied within an FL setup, REDUS enhances the efficiency of model training on resource-limited edge devices while maintaining network performance. In this paper, REDUS is evaluated on the CICIoT2023 dataset for IoT attack detection, showing a training time reduction of up to 72.6% with a minimal accuracy loss of only 1.62%, offering a scalable and practical solution for intelligent networks.
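The core idea of REDUS, retaining every misclassified sample while pruning a portion of the correctly classified (redundant) ones before the next epoch, can be illustrated with a minimal sketch. Note that the function name `redus_resample` and the `keep_frac` parameter are illustrative assumptions, not details from the paper, which does not specify its pruning schedule here.

```python
import numpy as np

def redus_resample(x, y, y_pred, keep_frac=0.3, rng=None):
    """Hypothetical sketch of a REDUS-style resampling step.

    Keeps all misclassified samples (as AdaBoost up-weights hard
    examples) and only a fraction of the correctly classified ones,
    shrinking the training set for the next epoch.
    `keep_frac` is an illustrative knob, not a value from the paper.
    """
    rng = rng or np.random.default_rng(0)
    wrong = y_pred != y                        # mask of misclassified samples
    right_idx = np.flatnonzero(~wrong)         # correctly classified indices
    n_keep = int(len(right_idx) * keep_frac)   # prune the rest as redundant
    kept_right = rng.choice(right_idx, size=n_keep, replace=False)
    idx = np.concatenate([np.flatnonzero(wrong), kept_right])
    rng.shuffle(idx)                           # avoid ordering bias in batches
    return x[idx], y[idx]

# Usage: shrink the next epoch's training set on synthetic data
rng = np.random.default_rng(1)
x = rng.normal(size=(1000, 8))
y = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)   # stand-in for model predictions
x_small, y_small = redus_resample(x, y, y_pred)
```

In an FL setting, each edge client would apply such a step locally after an evaluation pass, so the per-round compute and energy cost scales with the reduced sample count rather than the full local dataset.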