AI Summary
This work addresses the privacy risks associated with uploading raw text to large language model (LLM) services, where existing privacy-preserving methods often incur substantial computational overhead or degrade model performance. To overcome these limitations, the authors propose Privacy-Preserving Fine-Tuning (PPFT), a novel framework in which clients generate prompt embeddings via an encoder, and the server performs inference conditioned on k-pooled embeddings. The projection module and the LLM are fine-tuned on private data using noise-injected embeddings, without clients requiring access to the decoder's internal parameters. PPFT achieves end-to-end privacy-preserving inference without transmitting raw input text, effectively balancing privacy and utility. Experimental results demonstrate that PPFT incurs negligible performance loss on both general and domain-specific benchmarks, closely approaching the upper bound achievable without noise injection.
Abstract
Current LLM-based services typically require users to submit raw text regardless of its sensitivity. While intuitive, this practice introduces substantial privacy risks, as unauthorized access may expose personal, medical, or legal information. Although prior defenses have striven to mitigate these risks, they often incur substantial computational overhead and degrade model performance. To overcome this privacy-efficiency trade-off, we introduce Privacy-Preserving Fine-Tuning (PPFT), a novel training pipeline that eliminates the need to transmit raw prompt text while maintaining a favorable balance between privacy preservation and model utility for both clients and service providers. Our approach operates in two stages: first, we train a client-side encoder together with a server-side projection module and LLM, enabling the server to condition on k-pooled prompt embeddings instead of raw text; second, we fine-tune the projection module and LLM on private, domain-specific data using noise-injected embeddings, allowing effective adaptation without exposing plain-text prompts or requiring client access to the decoder's internal parameters. Extensive experiments on domain-specific and general benchmarks demonstrate that PPFT achieves a strong balance between privacy and utility, maintaining competitive performance with minimal degradation compared to noise-free upper bounds.
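The client-side portion of this pipeline can be illustrated with a minimal sketch. The abstract does not specify the pooling or noise mechanism, so the following assumes chunk-wise mean pooling for the "k-pooled" embeddings and Gaussian noise for the injection step; the encoder is stubbed with random features, and all function names (`k_pool`, `add_noise`) are illustrative, not the authors' API:

```python
import numpy as np

def k_pool(token_embeddings: np.ndarray, k: int) -> np.ndarray:
    """Compress a (seq_len, dim) matrix of token embeddings into k pooled
    vectors by mean-pooling k contiguous chunks (pooling scheme assumed)."""
    chunks = np.array_split(token_embeddings, k, axis=0)
    return np.stack([c.mean(axis=0) for c in chunks])

def add_noise(embeddings: np.ndarray, sigma: float,
              rng: np.random.Generator) -> np.ndarray:
    """Inject Gaussian noise into embeddings before they leave the client
    (noise distribution is an assumption)."""
    return embeddings + rng.normal(0.0, sigma, size=embeddings.shape)

# Client side: encode a prompt (encoder output stubbed with random
# features), pool down to k vectors, add noise, and transmit only the
# noisy embeddings -- never the raw text.
rng = np.random.default_rng(0)
encoder_output = rng.normal(size=(128, 768))  # stand-in for encoder features
pooled = k_pool(encoder_output, k=8)          # shape (8, 768)
noisy = add_noise(pooled, sigma=0.1, rng=rng) # what the server receives
print(noisy.shape)
```

On the server side, the projection module would map these k noisy vectors into the LLM's embedding space, and both are fine-tuned under the same noise so the model adapts to the perturbed inputs.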