Model Inversion in Split Learning for Personalized LLMs: New Insights from Information Bottleneck Theory

📅 2025-01-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies a novel model inversion attack threat against mobile personalized large language models (e.g., GPT-4) in split learning, arising from the upload of intermediate representations to the server. It introduces mutual information entropy for the first time to quantify privacy leakage at the level of individual Transformer blocks, systematically evaluating layer-wise privacy risks grounded in information bottleneck theory. The authors propose a two-stage inversion attack framework: (1) project the sparse intermediate representations into the embedding space, then (2) reconstruct the original input text with a conditional generative model (diffusion or VAE). Experiments demonstrate attack success rates of 38%–75%, surpassing state-of-the-art methods by over 60%. This is the first empirical evidence of substantial privacy vulnerabilities in edge-deployed personalized LLMs.
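To make the two-stage pipeline concrete, here is a minimal toy sketch of the idea, not the paper's implementation: stage 1 learns a projection from intermediate representations back into the embedding space (plain least squares here, where the paper trains a learned projector), and stage 2 recovers tokens from the projected embeddings (a nearest-neighbour lookup stands in for the conditional diffusion/VAE decoder). All names, dimensions, and the linear "client layer" are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a vocabulary of 50 token embeddings (dim 16) and a frozen
# "client-side" layer producing intermediate representations (dim 32).
# A real attack targets a Transformer block; a linear map keeps the sketch simple.
vocab_size, d_emb, d_rep = 50, 16, 32
vocab_emb = rng.normal(size=(vocab_size, d_emb))
client_layer = rng.normal(size=(d_emb, d_rep))

def to_representation(token_ids):
    """What the device would upload to the server under split learning."""
    return vocab_emb[token_ids] @ client_layer

# Attacker's auxiliary data: known (token, representation) pairs.
aux_tokens = rng.integers(0, vocab_size, size=200)
aux_reps = to_representation(aux_tokens)

# Stage 1: learn a projection from representation space to embedding space.
W, *_ = np.linalg.lstsq(aux_reps, vocab_emb[aux_tokens], rcond=None)

# Stage 2: decode projected embeddings back to tokens. Nearest-neighbour
# lookup is a stand-in for the paper's generative reconstruction model.
def invert(reps):
    proj = reps @ W  # back into embedding space
    dists = ((proj[:, None, :] - vocab_emb[None]) ** 2).sum(-1)
    return dists.argmin(axis=1)

secret = rng.integers(0, vocab_size, size=20)  # victim's "input text"
recovered = invert(to_representation(secret))
accuracy = (recovered == secret).mean()
```

Because the toy client layer is linear and over-complete, recovery here is near-perfect; the point is only the division of labour between the projection stage and the decoding stage.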

📝 Abstract
Personalized Large Language Models (LLMs) have become increasingly prevalent, showcasing the impressive capabilities of models like GPT-4. This trend has also catalyzed extensive research on deploying LLMs on mobile devices. Split learning is a feasible approach for such edge-cloud deployment. However, previous research has largely overlooked the privacy leakage associated with intermediate representations transmitted from devices to servers. This work is the first to identify model inversion attacks in the split learning framework for LLMs, emphasizing the necessity of secure defense. For the first time, we introduce mutual information entropy to understand the information propagation of Transformer-based LLMs and assess privacy attack performance for LLM blocks. To address the issue of representations being sparser and containing less information than embeddings, we propose a two-stage attack system in which the first part projects representations into the embedding space, and the second part uses a generative model to recover text from these embeddings. This design breaks down the complexity and achieves attack scores of 38%-75% in various scenarios, an over 60% improvement over the SOTA. This work comprehensively highlights the potential privacy risks during the deployment of personalized LLMs on the edge side.
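The abstract's use of mutual information to assess layer-wise leakage can be illustrated with a small self-contained sketch (not the paper's estimator): a plug-in histogram estimate of I(X; Y), applied to a toy "network" where each layer adds independent noise, so the input-representation mutual information shrinks with depth, in line with the information-bottleneck view. The noise model and bin count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_information(x, y, bins=8):
    """Plug-in histogram estimate of I(X; Y) in nats for two 1-D samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of X
    py = pxy.sum(axis=0, keepdims=True)  # marginal of Y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Toy "layers": each adds independent noise, forming a Markov chain
# x -> rep_1 -> rep_2 -> ..., so I(x; rep_k) must fall with depth
# (data processing inequality) -- deeper cut points leak less.
x = rng.normal(size=5000)
rep = x.copy()
leakage = []
for depth in range(4):
    rep = rep + rng.normal(scale=0.8, size=rep.shape)
    leakage.append(mutual_information(x, rep))
```

With this estimator, `leakage[0]` comes out clearly larger than `leakage[-1]`, mirroring the paper's premise that the choice of split point governs how much private information the uploaded representations expose.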
Problem

Research questions and friction points this paper is trying to address.

Privacy Leakage
Federated Learning
Personalized Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Privacy Attacks
Mutual Information Entropy
Split Learning
Yunmeng Shu
Shanghai Jiao Tong University, Shanghai, China

Shaofeng Li
Southeast University
AI Security
Backdoor Attacks

Tian Dong
Shanghai Jiao Tong University
Computer Security
Machine Learning

Yan Meng
Shanghai Jiao Tong University, Shanghai, China

Haojin Zhu
Shanghai Jiao Tong University, Shanghai, China