CrypTorch: PyTorch-based Auto-tuning Compiler for Machine Learning with Multi-party Computation

📅 2025-11-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing MPC-based ML inference suffers from performance bottlenecks and accuracy degradation due to hand-crafted, inefficient, or imprecise approximations of non-linear operators (e.g., Softmax, GELU). Method: We propose the first PyTorch 2–based auto-tuning compiler specifically designed for MPC. Its core innovation is decoupling approximation functions from the MPC runtime, enabling scalable definition and end-to-end automated search of approximation strategies with dynamic trade-offs between accuracy and efficiency. The approach integrates PyTorch 2’s compiler infrastructure, lightweight auto-tuning algorithms, and protocol-aware optimizations. Contribution/Results: Experiments show end-to-end speedups of 3.22×–8.6× over CrypTen; auto-tuning further delivers 1.2×–1.8× acceleration. Our framework significantly advances the practical deployment of efficient, high-fidelity secure inference.
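The "decoupling" idea above can be sketched as a registry of candidate approximations for a non-linear op, plus a search that picks the cheapest candidate meeting an accuracy budget. This is a hypothetical illustration, not CrypTorch's actual interface; the cost numbers and function names are assumptions.

```python
import math

# Hypothetical registry: each entry is (name, callable, assumed relative MPC cost).
# The three GELU candidates below are well-known plaintext approximations.
APPROXIMATIONS = {
    "gelu": [
        ("tanh_approx", lambda x: 0.5 * x * (1 + math.tanh(
            math.sqrt(2 / math.pi) * (x + 0.044715 * x ** 3))), 3.0),
        ("sigmoid_approx", lambda x: x / (1 + math.exp(-1.702 * x)), 2.0),
        ("relu_fallback", lambda x: max(x, 0.0), 1.0),
    ],
}

def exact_gelu(x):
    # Reference (non-MPC) GELU used only for calibration.
    return 0.5 * x * (1 + math.erf(x / math.sqrt(2)))

def autotune(op, samples, max_err):
    """Pick the cheapest approximation whose worst-case error on the
    calibration samples stays within max_err."""
    for name, fn, cost in sorted(APPROXIMATIONS[op], key=lambda c: c[2]):
        err = max(abs(fn(x) - exact_gelu(x)) for x in samples)
        if err <= max_err:
            return name, err
    # Nothing met the budget: fall back to the most expensive candidate.
    name, fn, _ = max(APPROXIMATIONS[op], key=lambda c: c[2])
    return name, max(abs(fn(x) - exact_gelu(x)) for x in samples)

samples = [i / 10 for i in range(-40, 41)]
print(autotune("gelu", samples, max_err=1e-3))
```

Loosening the error budget (e.g. `max_err=0.05`) lets the search select the cheaper sigmoid-based candidate, which is the accuracy-vs-efficiency trade-off the summary describes.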

📝 Abstract
Machine learning (ML) involves private data and proprietary model parameters. MPC-based ML lets multiple parties use secure multi-party computation (MPC) to collaboratively run an ML workload without sharing their private data or model parameters. Because MPC cannot natively run ML operations such as Softmax or GELU, existing frameworks use different approximations. Our study shows that, on a well-optimized framework, these approximations often become the dominating bottleneck. Popular approximations are often insufficiently accurate or unnecessarily slow, and these issues are hard to identify and fix in existing frameworks. To tackle this issue, we propose a compiler for MPC-based ML, CrypTorch. CrypTorch disentangles these approximations from the rest of the MPC runtime, allows easily adding new approximations through its programming interface, and automatically selects approximations to maximize both performance and accuracy. Built as an extension to PyTorch 2's compiler, we show that CrypTorch's auto-tuning alone provides 1.20–1.7× immediate speedup without sacrificing accuracy, and 1.31–1.8× speedup when some accuracy degradation is allowed, compared to our well-optimized baseline. Combined with better engineering and adoption of state-of-the-art practices, the entire framework brings 3.22–8.6× end-to-end speedup compared to the popular framework, CrypTen.
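To see why Softmax needs approximation under MPC, note that secret-sharing protocols natively support only additions and multiplications, so `exp()` must be rebuilt from those primitives. A common trick (used, e.g., by CrypTen) is the limit definition of the exponential, which reduces to repeated squaring. The sketch below shows the plaintext arithmetic only; in a real protocol each squaring would be one secure multiplication.

```python
def mpc_friendly_exp(x, n=8):
    # exp(x) ≈ (1 + x / 2**n) ** (2**n): n squarings of (1 + x/2^n).
    # Uses only additions and multiplications, hence MPC-friendly.
    y = 1 + x / (1 << n)
    for _ in range(n):
        y = y * y
    return y

def approx_softmax(xs, n=8):
    # Subtracting the max (a comparison-based step in real MPC) keeps the
    # inputs non-positive, where the limit approximation is accurate.
    m = max(xs)
    exps = [mpc_friendly_exp(x - m, n) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

print(approx_softmax([1.0, 2.0, 3.0]))
```

For these inputs the result agrees with the exact softmax to about three decimal places with `n=8`; increasing `n` tightens the approximation at the cost of more secure multiplications, which is exactly the kind of trade-off CrypTorch's auto-tuner searches over.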
Problem

Research questions and friction points this paper is trying to address.

Optimizing approximation bottlenecks in MPC-based machine learning frameworks
Automating selection of efficient and accurate approximations for MPC operations
Enhancing performance of privacy-preserving ML without compromising data security
Innovation

Methods, ideas, or system contributions that make the work stand out.

Auto-tuning compiler for MPC-based machine learning
Disentangles approximations from MPC runtime operations
Automatically selects approximations for performance and accuracy
Jinyu Liu
The Pennsylvania State University, USA
Gang Tan
The Pennsylvania State University, USA
Kiwan Maeng
Pennsylvania State University
Privacy-preserving ML, systems for ML, compilers, embedded systems