A Quantized VAE-MLP Botnet Detection Model: A Systematic Evaluation of Quantization-Aware Training and Post-Training Quantization Strategies

📅 2025-11-05
🤖 AI Summary
To address the challenge of deploying deep learning-based intrusion detection models on resource-constrained IoT devices, this paper proposes a lightweight botnet detection framework comprising a quantized variational autoencoder (VAE) for extracting low-dimensional, robust features, followed by a lightweight multilayer perceptron (MLP) classifier. Our key innovation lies in the synergistic integration of VAE-based feature compression and model quantization, with a systematic comparative analysis of quantization-aware training (QAT) versus post-training quantization (PTQ) in edge-security scenarios. Evaluations on the N-BaIoT and CICIoT2022 datasets show that PTQ incurs <0.5% accuracy degradation while achieving 6× inference acceleration and 21× model size reduction; QAT attains 24× compression and 3× speedup. Results demonstrate that PTQ better suits ultra-resource-constrained IoT edge environments, providing a practical, deployable pathway for lightweight network threat detection.
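The inference pipeline described above (a pretrained VAE encoder compressing high-dimensional traffic features into an 8-dimensional latent vector, then a small MLP classifier) can be sketched roughly as follows. This is a minimal NumPy illustration with random stand-in weights and hypothetical layer sizes (115 input features as in N-BaIoT, one hidden layer, binary output), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: N-BaIoT traffic features are 115-dimensional;
# the paper compresses them to an 8-dimensional latent space.
INPUT_DIM, LATENT_DIM, HIDDEN_DIM, N_CLASSES = 115, 8, 16, 2

def relu(x):
    return np.maximum(x, 0.0)

# Stand-in for the pretrained VAE encoder: at inference time only the
# mean head is needed to produce a deterministic latent feature vector.
W_enc = rng.normal(scale=0.1, size=(INPUT_DIM, LATENT_DIM))
b_enc = np.zeros(LATENT_DIM)

def encode(x):
    return x @ W_enc + b_enc  # (batch, 115) -> (batch, 8)

# Lightweight MLP classifier trained on the 8-d latent vectors.
W1 = rng.normal(scale=0.1, size=(LATENT_DIM, HIDDEN_DIM))
b1 = np.zeros(HIDDEN_DIM)
W2 = rng.normal(scale=0.1, size=(HIDDEN_DIM, N_CLASSES))
b2 = np.zeros(N_CLASSES)

def classify(z):
    logits = relu(z @ W1 + b1) @ W2 + b2
    return logits.argmax(axis=-1)  # benign vs. botnet label per sample

x = rng.normal(size=(4, INPUT_DIM))  # a batch of traffic feature vectors
z = encode(x)                        # (4, 8) latent features
preds = classify(z)                  # (4,) class predictions
print(z.shape, preds.shape)
```

The point of the split is that the MLP only ever sees 8-dimensional inputs, which is what makes the classifier small enough to quantize aggressively for edge deployment.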

๐Ÿ“ Abstract
In an effort to counter the increasing number of IoT botnet-based attacks, state-of-the-art deep learning methods have been proposed and have achieved impressive detection accuracy. However, their computational intensity restricts deployment on resource-constrained IoT devices, creating a critical need for lightweight detection models. A common solution to this challenge is model compression via quantization. This study proposes a VAE-MLP model framework in which an MLP-based classifier is trained on 8-dimensional latent vectors derived from the high-dimensional training data using the encoder component of a pretrained variational autoencoder (VAE). Two widely used quantization strategies, Quantization-Aware Training (QAT) and Post-Training Quantization (PTQ), are then systematically evaluated in terms of their impact on detection performance, storage efficiency, and inference latency using two benchmark IoT botnet datasets, N-BaIoT and CICIoT2022. The results revealed that, with respect to detection accuracy, the QAT strategy experienced a more noticeable decline, whereas PTQ incurred only a marginal reduction compared to the original unquantized model. Furthermore, PTQ yielded a 6× speedup and a 21× reduction in size, while QAT achieved a 3× speedup and 24× compression, demonstrating the practicality of quantization for device-level IoT botnet detection.
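Post-training quantization of the kind compared in the abstract maps trained float32 weights onto int8 integers with a per-tensor scale and zero point, shrinking storage 4× per tensor (the paper's larger end-to-end ratios also reflect graph-level optimization). The sketch below shows a generic affine int8 scheme in plain NumPy; it illustrates the idea under assumed conventions (asymmetric, per-tensor quantization) and is not the paper's exact tooling:

```python
import numpy as np

def quantize_int8(w):
    """Affine (asymmetric) post-training quantization of a float tensor
    to int8; returns the quantized tensor plus (scale, zero_point)."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 or 1.0   # guard constant tensors
    zero_point = round(-128 - w_min / scale) # maps w_min -> -128
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximate float tensor from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(1)
w = rng.normal(scale=0.5, size=(8, 16)).astype(np.float32)  # a float32 MLP weight

q, scale, zp = quantize_int8(w)
w_hat = dequantize(q, scale, zp)

# Round-trip error is bounded by roughly half the quantization step.
print("max abs error:", np.abs(w - w_hat).max())
```

Because no retraining is involved, this is cheap to apply after the fact; QAT instead simulates this rounding during training so the network can adapt to it, which is the trade-off the paper measures.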
Problem

Research questions and friction points this paper is trying to address.

Developing lightweight botnet detection models for resource-constrained IoT devices
Systematically evaluating quantization strategies to balance performance and efficiency
Comparing Quantization-Aware Training and Post-Training Quantization on detection metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

VAE-MLP model uses compressed latent vectors for classification
Systematic comparison of quantization-aware training and post-training quantization
Post-training quantization achieves 21x size reduction with minimal accuracy loss