Efficient Private Inference Based on Helper-Assisted Malicious Security Dishonest Majority MPC

📅 2025-07-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the low efficiency and weak security guarantees of maliciously secure multi-party computation (MPC) for private inference in Machine Learning as a Service (MLaaS) under a dishonest-majority threat model, this paper proposes the first helper-assisted, maliciously secure MPC framework supporting a dishonest majority. The approach introduces five fixed-round protocols and a co-optimization strategy, combining sixth-order polynomial approximation of nonlinear activation functions with parameter-adjusted batch normalization to suppress the activation escape problem while preserving high-fidelity function fitting. Evaluated on LeNet and AlexNet, the framework achieves 2.4–25.7× speedup over baselines under LAN and 1.3–9.5× under WAN, with relative errors of only 0.04%–1.08%. To the authors' knowledge, this is the first work to simultaneously achieve high efficiency and high accuracy for neural network inference under strong malicious-security guarantees.

📝 Abstract
Private inference based on Secure Multi-Party Computation (MPC) addresses data privacy risks in Machine Learning as a Service (MLaaS). However, existing MPC-based private inference frameworks focus on semi-honest or honest-majority models, whose threat models are overly idealistic, while maliciously secure dishonest-majority models suffer from low efficiency. To balance security and efficiency, we propose a private inference framework using a Helper-Assisted Malicious Security Dishonest Majority Model (HA-MSDM). This framework comprises five newly designed MPC protocols and a co-optimization strategy. The protocols achieve efficient fixed-round multiplication, exponentiation, and polynomial operations, providing foundational primitives for private inference. The co-optimization strategy balances inference efficiency and accuracy. To enhance efficiency, we employ polynomial approximation for nonlinear layers. To improve accuracy, we construct a sixth-order polynomial approximation within a fixed interval for high-precision activation function fitting and introduce parameter-adjusted batch normalization layers to constrain the activation escape problem. Benchmark results on LeNet and AlexNet show our framework achieves 2.4-25.7x speedup in LAN and 1.3-9.5x acceleration in WAN compared to state-of-the-art frameworks (IEEE S&P'25), maintaining high accuracy with only 0.04%-1.08% relative errors.
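The fixed-interval polynomial fitting described in the abstract can be sketched as follows. This is a minimal illustration only: the interval [-5, 5], the choice of sigmoid as the activation, and the use of a least-squares fit via `numpy.polyfit` are assumptions for demonstration, not the paper's exact construction (which pairs the fit with batch normalization to keep activations inside the interval).

```python
import numpy as np

def fit_activation(act, degree=6, lo=-5.0, hi=5.0, samples=1000):
    """Fit a degree-`degree` polynomial to `act` on [lo, hi] by least squares."""
    xs = np.linspace(lo, hi, samples)
    coeffs = np.polyfit(xs, act(xs), degree)
    return np.poly1d(coeffs)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Degree-6 approximation of sigmoid on the fixed interval [-5, 5].
p = fit_activation(sigmoid)

xs = np.linspace(-5.0, 5.0, 200)
max_abs_err = float(np.max(np.abs(p(xs) - sigmoid(xs))))
print(f"max abs error on [-5, 5]: {max_abs_err:.4f}")
```

Inside the interval the polynomial tracks the activation closely; outside it the polynomial diverges rapidly, which is exactly the "activation escape" problem the batch-normalization layers are introduced to constrain.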
Problem

Research questions and friction points this paper is trying to address.

Balancing security and efficiency in private inference frameworks
Enhancing MPC protocols for malicious security dishonest majority models
Improving accuracy and speed in MLaaS private inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Helper-Assisted Malicious Security Dishonest Majority Model
Five efficient fixed-round MPC protocols
Polynomial approximation for nonlinear layers
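To make the helper-assisted idea concrete, here is a minimal sketch of fixed-round secure multiplication over a prime field using a multiplication (Beaver) triple dealt by the helper. The two-party additive sharing, the field modulus, and all function names are illustrative assumptions; the paper's actual protocols additionally include the checks needed for malicious security, which are omitted here.

```python
import secrets

P = 2**61 - 1  # illustrative Mersenne prime field modulus

def share(x):
    """Split x into two additive shares modulo P."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def helper_triple():
    """The helper samples (a, b, ab) and deals additive shares to the parties."""
    a, b = secrets.randbelow(P), secrets.randbelow(P)
    return share(a), share(b), share((a * b) % P)

def mul(x_sh, y_sh):
    """One-round multiplication of shared x and y using a helper triple."""
    (a0, a1), (b0, b1), (c0, c1) = helper_triple()
    # Both parties open the masked values d = x - a and e = y - b.
    d = (x_sh[0] - a0 + x_sh[1] - a1) % P
    e = (y_sh[0] - b0 + y_sh[1] - b1) % P
    # Local recombination: z = d*e + d*b + e*a + c = x*y (d*e added once).
    z0 = (d * e + d * b0 + e * a0 + c0) % P
    z1 = (d * b1 + e * a1 + c1) % P
    return z0, z1

x_sh, y_sh = share(7), share(9)
z0, z1 = mul(x_sh, y_sh)
print((z0 + z1) % P)  # reconstructs 7 * 9 = 63
```

Because the triple comes from the helper rather than from heavyweight offline preprocessing between mutually distrusting parties, each multiplication completes in a fixed number of rounds, which is the efficiency lever the five protocols build on.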
Kaiwen Wang, Yuehan Dong, Junchao Fan, Xiaolin Chang
Beijing Jiaotong University
dependable and secure computing