FOBNN: Fast Oblivious Binarized Neural Network Inference

📅 2024-05-06
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address input and output privacy leakage in Deep Learning-as-a-Service (DLaaS), this paper proposes an efficient and secure oblivious neural network inference scheme. The method combines binarized neural networks, secure multi-party computation (MPC), and oblivious transfer (OT) to enable privacy-preserving inference. Its core innovations are two novel algorithms, Bit Length Bounding (BLB) and Layer-wise Bit Accumulation (LBA), which incorporate pure bit-level operations into binarized convolutional inference. The authors provide formal security proofs in the malicious adversary model. Evaluated on RNA function prediction, the scheme achieves accuracy comparable to or exceeding the original full-precision model while reducing computational and communication overhead by up to 2× compared with state-of-the-art approaches, delivering strong end-to-end privacy guarantees without compromising practical efficiency.

📝 Abstract
The superior performance of deep learning has propelled the rise of Deep Learning as a Service, enabling users to transmit their private data to service providers for model execution and inference retrieval. Nevertheless, the primary concern remains safeguarding the confidentiality of sensitive user data while optimizing the efficiency of secure protocols. To address this, we develop a fast oblivious binarized neural network inference framework, FOBNN. Specifically, we customize binarized convolutional neural networks to enhance oblivious inference, design two fast algorithms for binarized convolutions, and optimize network structures experimentally under constrained costs. Initially, we meticulously analyze the range of intermediate values in binarized convolutions to minimize bit representation, resulting in the Bit Length Bounding (BLB) algorithm. Subsequently, leveraging the efficiency of bitwise operations in BLB, we further enhance performance by employing pure bitwise operations for each binary digit position, yielding the Layer-wise Bit Accumulation (LBA) algorithm. Theoretical analysis validates FOBNN's security and indicates up to $2\times$ improvement in computational and communication costs compared to the state-of-the-art method. We demonstrate our framework's effectiveness in RNA function prediction within bioinformatics. Rigorous experimental assessments confirm that our oblivious inference solutions not only maintain but often exceed the original accuracy, surpassing prior efforts.
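The Bit Length Bounding idea described above — bounding the range of intermediate sums in a binarized convolution so they fit in a minimal number of bits — can be sketched in plaintext as follows. This is a minimal illustration under assumptions (the XNOR-popcount formulation and all function names are ours), not the paper's secure protocol:

```python
import numpy as np

def xnor_popcount_dot(x_bits, w_bits):
    """Binarized dot product: encode {-1,+1} as {0,1}. The popcount of the
    XNOR (agreement count) recovers the sum of element-wise products."""
    n = x_bits.size
    matches = int(np.sum(x_bits == w_bits))  # popcount of XNOR
    return 2 * matches - n                   # result always lies in [-n, n]

def blb_bit_length(n):
    """BLB-style bound: any intermediate sum of n binarized products lies in
    [-n, n], so ceil(log2(2n + 1)) bits suffice instead of full machine words."""
    return int(np.ceil(np.log2(2 * n + 1)))
```

For a window of 4 binarized values the accumulator needs only 4 bits rather than 32, which is where the communication savings in a secure-computation setting would come from.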
Problem

Research questions and friction points this paper is trying to address.

Enhance privacy in deep learning via oblivious inference
Optimize binarized neural networks for faster secure computation
Reduce communication costs in secure multiparty computation techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Binarized Neural Networks for oblivious inference
Bit Length Bounding to reduce computations
Layer-wise Bit Accumulation boosts performance
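The Layer-wise Bit Accumulation bullet above can be illustrated with a toy accumulator that adds one-bit contributions across a layer using only XOR (sum bit) and AND (carry bit) on bit-planes. This is a hedged sketch of the bitwise flavor of LBA in the clear, not the paper's MPC/OT protocol; all names are illustrative:

```python
import numpy as np

def add_bit_plane(planes, incoming):
    """Add a {0,1} vector to per-position accumulators stored as bit-planes,
    using only XOR (sum) and AND (ripple carry) — pure bitwise accumulation."""
    carry = incoming.astype(bool)
    for i in range(len(planes)):
        planes[i], carry = planes[i] ^ carry, planes[i] & carry
    if carry.any():
        planes.append(carry)  # grow the accumulator by one bit-plane
    return planes

def decode(planes):
    """Recover integer sums from the bit-plane representation."""
    return sum((1 << i) * p.astype(int) for i, p in enumerate(planes))
```

Accumulating the bit vectors [1,1,0], [1,0,1], [1,1,1] yields per-position sums [3,2,2], matching a plain integer sum while touching only single-bit operations — the property that makes such accumulations cheap inside a secure-computation protocol.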
Xin Chen
East China Normal University, Shanghai, China
Zhili Chen
East China Normal University, Shanghai, China
Benchang Dong
East China Normal University, Shanghai, China
Shiwen Wei
East China Normal University, Shanghai, China
Lin Chen
Sun Yat-sen University, Guangzhou, China
Daojing He
School of Computer Science and Engineering, South China University of Technology
Network and Information Security