SecFwT: Efficient Privacy-Preserving Fine-Tuning of Large Language Models Using Forward-Only Passes

📅 2025-06-18
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the high computational overhead of secure multi-party computation (MPC) that severely constrains large language model (LLM) fine-tuning in privacy-sensitive domains such as healthcare and finance, this paper proposes forward-only fine-tuning: a paradigm that entirely bypasses backward propagation and optimizer updates under privacy protection. The method introduces two key innovations: (1) an MPC-friendly random feature attention mechanism that replaces standard softmax attention at significantly lower nonlinear computation cost; and (2) an end-to-end privacy-preserving lightweight fine-tuning architecture. Experiments demonstrate that the approach achieves a several-fold speedup under strict MPC security guarantees, substantially improving the scalability of LLM fine-tuning in privacy-constrained settings. This work provides an efficient and practical pathway for deploying LLMs in sensitive application domains.
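The summary attributes much of the speedup to replacing softmax attention with random feature attention. As a rough plaintext illustration (not the MPC protocol itself, and the exact feature map SecFwT uses is not given here), a Performer-style positive random feature map turns attention into two matrix products with no softmax:

```python
import numpy as np

def random_feature_map(x, W):
    # Positive random features whose inner products approximate exp(q.k)
    # (Performer-style); assumed here, since the summary does not specify
    # which feature map SecFwT adopts.
    proj = x @ W.T
    return np.exp(proj - np.square(x).sum(-1, keepdims=True) / 2) / np.sqrt(W.shape[0])

def rfa_attention(Q, K, V, num_features=64, seed=0):
    rng = np.random.default_rng(seed)
    d = Q.shape[-1]
    W = rng.standard_normal((num_features, d))
    # Scale by d**0.25 so the approximated kernel matches exp(q.k / sqrt(d))
    Qf = random_feature_map(Q / d**0.25, W)   # (n_q, m)
    Kf = random_feature_map(K / d**0.25, W)   # (n_k, m)
    KV = Kf.T @ V                             # (m, d_v): no n_q x n_k matrix
    Z = Qf @ Kf.sum(0)                        # per-query normalizer, (n_q,)
    return (Qf @ KV) / Z[:, None]
```

Because the feature map is positive, the normalized weights form a convex combination, so the approximation behaves like attention (constant value vectors pass through unchanged). In MPC this removes the per-row secure exponentials and division of softmax, which the abstract identifies as the costly non-linear operations.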

πŸ“ Abstract
Large language models (LLMs) have transformed numerous fields, yet their adaptation to specialized tasks in privacy-sensitive domains, such as healthcare and finance, is constrained by the scarcity of accessible training data due to stringent privacy requirements. Secure multi-party computation (MPC)-based privacy-preserving machine learning offers a powerful approach to protect both model parameters and user data, but its application to LLMs has been largely limited to inference, as fine-tuning introduces significant computational challenges, particularly in privacy-preserving backward propagation and optimizer operations. This paper identifies two primary obstacles to MPC-based privacy-preserving fine-tuning of LLMs: (1) the substantial computational overhead of backward and optimizer processes, and (2) the inefficiency of softmax-based attention mechanisms in MPC settings. To address these challenges, we propose SecFwT, the first MPC-based framework designed for efficient, privacy-preserving LLM fine-tuning. SecFwT introduces a forward-only tuning paradigm to eliminate backward and optimizer computations and employs MPC-friendly Random Feature Attention to approximate softmax attention, significantly reducing costly non-linear operations and computational complexity. Experimental results demonstrate that SecFwT delivers substantial improvements in efficiency and privacy preservation, enabling scalable and secure fine-tuning of LLMs for privacy-critical applications.
Problem

Research questions and friction points this paper is trying to address.

Privacy-preserving fine-tuning of LLMs in sensitive domains
High computational overhead in MPC-based backward propagation
Inefficient softmax attention in secure multi-party computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Forward-only tuning eliminates backward computations
MPC-friendly Random Feature Attention replaces softmax
Significantly reduces the cost and complexity of non-linear operations
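The forward-only bullet above can be made concrete with a representative forward-only optimizer: a MeZO-style zeroth-order (SPSA) update that estimates a directional gradient from two forward passes, so no backward pass or optimizer state is ever computed under MPC. Whether SecFwT uses this exact estimator is an assumption of this sketch:

```python
import numpy as np

def spsa_step(params, loss_fn, lr=1e-3, eps=1e-3, seed=0):
    """One zeroth-order update using two forward passes only (MeZO-style).

    Illustrative assumption: SecFwT's precise forward-only scheme is not
    detailed in this summary; SPSA is a standard stand-in.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(params.shape)          # random perturbation direction
    loss_plus = loss_fn(params + eps * z)          # forward pass 1
    loss_minus = loss_fn(params - eps * z)         # forward pass 2
    grad_proj = (loss_plus - loss_minus) / (2 * eps)  # scalar directional gradient
    return params - lr * grad_proj * z

# Toy demo: minimize ||theta - target||^2 with forward passes only.
target = np.array([1.0, -2.0, 0.5])
loss = lambda p: float(np.sum((p - target) ** 2))
theta = np.zeros(3)
for step in range(500):
    theta = spsa_step(theta, loss, lr=0.05, eps=1e-3, seed=step)
```

Each step costs two forward evaluations plus a cheap vector update, which is what lets the framework drop secure backward propagation and optimizer non-linearities entirely.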