RIFLE: Robust Distillation-based FL for Deep Model Deployment on Resource-Constrained IoT Networks

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of deploying traditional federated learning in resource-constrained IoT environments, where lightweight models struggle with highly non-IID data, complex tasks, and poisoning attacks. The authors propose a robust federated learning framework based on knowledge distillation, which innovatively integrates client-side logit uploading with a KL divergence-based verification mechanism. This approach enables reliable assessment of model updates without accessing raw data, thereby preserving privacy while supporting efficient collaborative training of deep architectures such as VGG-19 and ResNet18. Experimental results on MNIST, CIFAR-10, and CIFAR-100 demonstrate that the method achieves up to a 28.3% accuracy improvement within ten communication rounds, reduces false positives by 87.5%, enhances resilience against poisoning attacks by 62.5%, and cuts VGG-19 training time from an estimated 600 days to just 1.39 hours.

📝 Abstract
Federated learning (FL) is a decentralized learning paradigm widely adopted in resource-constrained Internet of Things (IoT) environments. Devices in these environments, typically relying on TinyML models, collaboratively train global models by sharing gradients with a central server while preserving data privacy. However, as data heterogeneity and task complexity increase, TinyML models often become insufficient to capture intricate patterns, especially under extreme non-IID (non-independent and identically distributed) conditions. Moreover, ensuring robustness against malicious clients and poisoned updates remains a major challenge. Accordingly, this paper introduces RIFLE - a Robust, distillation-based Federated Learning framework that replaces gradient sharing with logit-based knowledge transfer. By leveraging a knowledge distillation aggregation scheme, RIFLE enables the training of deep models such as VGG-19 and ResNet18 within constrained IoT systems. Furthermore, a Kullback-Leibler (KL) divergence-based validation mechanism quantifies the reliability of client updates without exposing raw data, achieving high trust and privacy preservation simultaneously. Experiments on three benchmark datasets (MNIST, CIFAR-10, and CIFAR-100) under heterogeneous non-IID conditions demonstrate that RIFLE reduces false-positive detections by up to 87.5%, enhances poisoning attack mitigation by 62.5%, and achieves up to 28.3% higher accuracy compared to conventional federated learning baselines within only 10 rounds. Notably, RIFLE reduces VGG-19 training time from over 600 days to just 1.39 hours on typical IoT devices (0.3 GFLOPS), making deep learning practical in resource-constrained networks.
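The abstract describes validating client-uploaded logits via KL divergence rather than inspecting raw data or gradients. The paper's exact scoring rule is not given here, so the following is only a minimal sketch of the general idea: the server softmaxes each client's logits, measures their KL divergence from a server-side reference distribution, and rejects clients whose divergence exceeds a threshold. The function names, the fixed threshold, and the use of a reference logit batch are all assumptions for illustration, not RIFLE's actual mechanism.

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax over the class axis
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    # D_KL(p || q), averaged over the sample batch
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.mean(np.sum(p * np.log(p / q), axis=-1)))

def score_client_updates(client_logits, reference_logits, threshold=0.5):
    """Score each client's logits against the server's reference logits.

    Returns (scores, accepted): per-client KL scores and a boolean
    accept/reject decision. Low divergence -> plausible update.
    """
    q = softmax(reference_logits)
    scores, accepted = {}, {}
    for cid, logits in client_logits.items():
        d = kl_divergence(softmax(logits), q)
        scores[cid] = d
        accepted[cid] = d <= threshold
    return scores, accepted

# Toy demo: one honest client (near-reference logits), one poisoned
# client whose predictions are adversarially flipped.
rng = np.random.default_rng(0)
ref = rng.normal(size=(32, 10))          # 32 samples, 10 classes
clients = {
    "honest": ref + 0.05 * rng.normal(size=ref.shape),
    "poisoned": -ref,                    # sign-flipped logits
}
scores, accepted = score_client_updates(clients, ref)
```

In this toy run the honest client's divergence stays near zero while the flipped logits produce a large divergence, so only the honest update passes the threshold. Because only probability distributions over classes are compared, no raw training data ever leaves the clients, which matches the privacy property the abstract claims.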
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Resource-Constrained IoT
Non-IID Data
Model Robustness
Poisoning Attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Learning
Knowledge Distillation
Robust Aggregation
KL Divergence
Resource-Constrained IoT