A Homomorphic Encryption Framework for Privacy-Preserving Spiking Neural Networks

📅 2023-08-10
🏛️ Inf.
📈 Citations: 3
Influential: 0
🤖 AI Summary
To address the trade-off between privacy leakage of sensitive data and computational overhead in cloud-based AI inference, this paper proposes the first homomorphic encryption (HE) framework tailored to spiking neural networks (SNNs), built on the BFV scheme. It enables encrypted inference for both SNNs and conventional deep neural networks (DNNs) using LeNet-5 and AlexNet architectures on FashionMNIST. Crucially, it presents the first systematic HE-based comparison between SNNs and DNNs, showing that SNNs significantly outperform DNNs under a low plaintext modulus, achieving up to 40% higher encrypted-inference accuracy while reducing modular-reduction overhead, though at the cost of longer execution time due to multiple time steps. This finding breaks the traditional accuracy–efficiency bottleneck of DNNs in ciphertext-domain computation, demonstrating that SNNs offer greater robustness, lower resource requirements, and stronger practicality for privacy-preserving AI. The work establishes a paradigm for deploying lightweight, secure neural networks in real-world encrypted environments.
📝 Abstract
Machine learning (ML) is widely used today, especially through deep neural networks (DNNs); however, increasing computational load and resource requirements have led to cloud-based solutions. To address this problem, a new generation of networks has emerged called spiking neural networks (SNNs), which mimic the behavior of the human brain to improve efficiency and reduce energy consumption. These networks often process large amounts of sensitive information, such as confidential data, and thus privacy issues arise. Homomorphic encryption (HE) offers a solution, allowing computations to be performed on encrypted data without decrypting it. This research compares traditional DNNs and SNNs using the Brakerski/Fan-Vercauteren (BFV) encryption scheme. The widely used convolutional architectures LeNet-5 and AlexNet serve as the basis for both the DNN and SNN models, and the networks are trained and compared on the FashionMNIST dataset. The results show that SNNs using HE achieve up to 40% higher accuracy than DNNs for low values of the plaintext modulus t, although their execution time is longer due to their time-coding nature with multiple time steps.
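The role of the plaintext modulus t can be illustrated with a small sketch (not the paper's implementation; the weights, activations, and value of t below are made up for illustration). In BFV, plaintext values live in Z_t, so any intermediate result exceeding t wraps around and corrupts the output. Quantized DNN activations are multi-bit integers, while SNN activations at a given time step are binary spikes, so SNN dot products stay small enough to survive a low t:

```python
def dot_mod_t(weights, activations, t):
    """Integer dot product as it would be evaluated in Z_t."""
    return sum(w * a for w, a in zip(weights, activations)) % t

weights = [3, -2, 5, 4, -1, 2, 3, -4]

# Quantized DNN-style activations (multi-bit integers).
dnn_acts = [17, 9, 24, 31, 12, 5, 28, 19]
# SNN-style activations at one time step (binary spikes).
snn_acts = [1, 0, 1, 1, 0, 0, 1, 1]

t = 257  # a deliberately small plaintext modulus

true_dnn = sum(w * a for w, a in zip(weights, dnn_acts))  # 283
true_snn = sum(w * a for w, a in zip(weights, snn_acts))  # 11

# The SNN pre-activation fits in Z_t; the DNN one wraps around.
print(true_dnn, dot_mod_t(weights, dnn_acts, t))  # 283 vs 26: overflow
print(true_snn, dot_mod_t(weights, snn_acts, t))  # 11 vs 11: intact
```

A larger t avoids the wraparound but increases ciphertext noise growth and modular-reduction cost, which is the accuracy–efficiency trade-off the paper probes.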
Problem

Research questions and friction points this paper is trying to address.

Privacy-preserving computation for sensitive data in neural networks
Efficiency comparison between DNNs and SNNs using homomorphic encryption
Accuracy and performance trade-offs in encrypted Spiking Neural Networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Homomorphic encryption for Spiking Neural Networks
BFV-based comparison of DNNs and SNNs
LeNet-5 and AlexNet models evaluated on the FashionMNIST dataset
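The core property these contributions rely on, that arithmetic on ciphertexts maps to arithmetic on the hidden plaintexts, can be shown in a few lines. The paper uses the BFV scheme, which is too involved for a short sketch, so the toy below uses textbook Paillier (an additively homomorphic scheme, not BFV) purely for intuition; the tiny primes are for readability and offer no security:

```python
import math
import random

p, q = 17, 19                 # toy primes; real keys use primes of 1024+ bits
n = p * q                     # public modulus
n2 = n * n
lam = math.lcm(p - 1, q - 1)  # private key
mu = pow(lam, -1, n)          # with g = n + 1, mu = lam^{-1} mod n

def encrypt(m):
    """Paillier encryption: c = g^m * r^n mod n^2, with g = n + 1."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Paillier decryption: m = L(c^lam mod n^2) * mu mod n, L(x) = (x-1)/n."""
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

a, b = 42, 58
ca, cb = encrypt(a), encrypt(b)
# Multiplying ciphertexts adds the underlying plaintexts -- no decryption needed.
c_sum = (ca * cb) % n2
print(decrypt(c_sum))  # 100
```

BFV additionally supports ciphertext–ciphertext multiplication (at the cost of noise growth and the plaintext-modulus constraints discussed above), which is what makes encrypted convolution and fully connected layers possible.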