Comparison of Fully Homomorphic Encryption and Garbled Circuit Techniques in Privacy-Preserving Machine Learning Inference

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work systematically compares fully homomorphic encryption (FHE) and garbled circuits (GC) for privacy-preserving machine learning inference, focusing on secure neural network evaluation under joint data and model confidentiality. Under a unified threat model, we conduct the first quantitative, five-dimensional comparison—covering inference error, end-to-end latency, memory footprint, communication rounds, and bandwidth—using CKKS (Microsoft SEAL) for FHE and TinyGarble2.0 (Intel Labs) for GC, with a two-layer neural network as the benchmark. Results show that GC achieves lower latency and memory overhead, whereas FHE enables single-round, non-interactive inference, offering superior communication efficiency and stronger privacy guarantees. The study uncovers a fundamental trade-off among interactivity, computational efficiency, and security strength, providing empirical guidance for scenario-driven selection of privacy-enhancing technologies in practical deployments.

📝 Abstract
As Machine Learning (ML) makes its way into fields such as healthcare, finance, and Natural Language Processing (NLP), concerns over data privacy and model confidentiality continue to grow. Privacy-preserving Machine Learning (PPML) addresses this challenge by enabling inference on private data without revealing sensitive inputs or proprietary models. Two widely studied approaches in this domain, both built on cryptographic secure-computation techniques, are Fully Homomorphic Encryption (FHE) and Garbled Circuits (GC). This work presents a comparative evaluation of FHE and GC for secure neural network inference. A two-layer neural network (NN) was implemented using the CKKS scheme from the Microsoft SEAL library (FHE) and the TinyGarble2.0 framework (GC) by Intel Labs. Both implementations are evaluated under the semi-honest threat model, measuring inference output error, round-trip time, peak memory usage, communication overhead, and communication rounds. Results reveal a trade-off: the modular GC implementation offers faster execution and lower memory consumption, while FHE supports non-interactive inference.
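The benchmark described in the abstract, a two-layer neural network evaluated under both FHE and GC, can be sketched in plaintext as below. This is a minimal illustration, not the paper's code: the abstract does not state the layer sizes, weights, or activation, so the square activation used here is an assumption (it is a common choice in CKKS-based inference, since only polynomials can be evaluated homomorphically, whereas a GC implementation could garble a ReLU circuit directly). All weights are illustrative.

```python
def square(x):
    # Polynomial activation x -> x^2 (assumed; CKKS-friendly)
    return [v * v for v in x]

def linear(W, b, x):
    # Affine layer: y_i = sum_j W[i][j] * x[j] + b[i]
    return [sum(w * v for w, v in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def two_layer_nn(x, W1, b1, W2, b2):
    # layer 1 -> activation -> layer 2
    return linear(W2, b2, square(linear(W1, b1, x)))

# Tiny example: 2 inputs -> 2 hidden units -> 1 output
W1 = [[1.0, 0.0], [0.0, 1.0]]
b1 = [0.0, 0.0]
W2 = [[1.0, 1.0]]
b2 = [0.5]
print(two_layer_nn([2.0, 3.0], W1, b1, W2, b2))  # -> [13.5]
```

Under FHE, the client would encrypt `x` with CKKS and the server would evaluate exactly this polynomial pipeline on ciphertexts in a single round; under GC, the same circuit would be garbled gate-by-gate and evaluated interactively, which is the interactivity/efficiency trade-off the paper quantifies.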
Problem

Research questions and friction points this paper is trying to address.

Comparing FHE and GC for secure neural network inference
Evaluating performance trade-offs in privacy-preserving machine learning
Assessing computational efficiency under semi-honest threat model
Innovation

Methods, ideas, or system contributions that make the work stand out.

First unified, five-dimensional quantitative comparison of FHE and GC for secure inference
Evaluated FHE via the CKKS scheme (Microsoft SEAL) and GC via TinyGarble2.0 (Intel Labs)
Benchmarked a two-layer neural network under a shared semi-honest threat model