Privacy-Preserving Offloading for Large Language Models in 6G Vehicular Networks

📅 2025-08-30
🤖 AI Summary
To address user data privacy leakage risks arising from LLM computation offloading in 6G vehicular networks, this paper proposes a lightweight privacy-preserving framework integrating federated learning (FL) and differential privacy (DP). The method introduces a privacy-aware task partitioning algorithm and a secure aggregation communication protocol, enabling efficient local–edge collaborative training under stringent onboard resource constraints. Under a strict DP budget of ε = 0.8, the global model achieves 75% convergence accuracy—only 2–3 percentage points lower than centralized training—while maintaining stable per-round communication overhead at 2.1 MB and over 90% of computation performed locally. Experimental results demonstrate a favorable trade-off among privacy guarantees, model accuracy, and system efficiency. The framework provides a deployable, privacy-enhanced paradigm for AI-enabled 6G vehicular systems.
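The summary above describes differentially private federated aggregation: client updates are clipped, noised, and averaged under a fixed privacy budget. A minimal sketch of that pattern is below; the paper's actual algorithm and parameters are not public here, so the function names, the clipping norm, and the noise multiplier are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Clip a client's model update to bound its L2 sensitivity."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / norm) if norm > 0 else update

def dp_federated_average(client_updates, clip_norm=1.0,
                         noise_multiplier=1.1, rng=None):
    """Average clipped client updates and add Gaussian noise calibrated
    to the clipping norm (Gaussian-mechanism-style DP).
    noise_multiplier is an assumed hyperparameter, not from the paper."""
    rng = rng or np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean = np.mean(clipped, axis=0)
    # Noise scale shrinks with the number of participating vehicles.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)
```

In practice the noise multiplier would be derived from the target budget (here, ε = 0.8) via a privacy accountant rather than set by hand.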

📝 Abstract
The integration of Large Language Models (LLMs) in 6G vehicular networks promises unprecedented advancements in intelligent transportation systems. However, offloading LLM computations from vehicles to edge infrastructure poses significant privacy risks, potentially exposing sensitive user data. This paper presents a novel privacy-preserving offloading framework for LLM-integrated vehicular networks. We introduce a hybrid approach combining federated learning (FL) and differential privacy (DP) techniques to protect user data while maintaining LLM performance. Our framework includes a privacy-aware task partitioning algorithm that optimizes the trade-off between local and edge computation, considering both privacy constraints and system efficiency. We also propose a secure communication protocol for transmitting model updates and aggregating results across the network. Experimental results demonstrate that our approach achieves 75% global accuracy with only a 2-3% reduction compared to non-privacy-preserving methods, while maintaining DP guarantees with an optimal privacy budget of $\varepsilon = 0.8$. The framework shows stable communication overhead of approximately 2.1 MB per round with computation comprising over 90% of total processing time, validating its efficiency for resource-constrained vehicular environments.
Problem

Research questions and friction points this paper is trying to address.

Privacy risks in offloading LLM computations to edge
Protecting sensitive user data in vehicular networks
Optimizing privacy-performance trade-off in 6G systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated learning and differential privacy hybrid approach
Privacy-aware task partitioning algorithm for computation optimization
Secure communication protocol for transmitting model updates
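The privacy-aware task partitioning idea can be sketched as a greedy policy: keep the most privacy-sensitive tasks on the vehicle until the local compute budget is exhausted, and offload the rest to the edge. This is an assumed simplification for illustration; the paper's actual partitioning algorithm, its scoring of tasks, and its budget model are not reproduced here.

```python
def partition_tasks(tasks, local_budget):
    """Greedy privacy-aware partitioning sketch.
    tasks: list of (name, privacy_score, compute_cost) tuples,
    where privacy_score in [0, 1] rates data sensitivity (assumed scale).
    Returns (local, edge) task-name lists."""
    local, edge = [], []
    used = 0.0
    # Consider the most privacy-sensitive tasks first.
    for name, score, cost in sorted(tasks, key=lambda t: -t[1]):
        if used + cost <= local_budget:
            local.append(name)   # fits on the vehicle: keep it local
            used += cost
        else:
            edge.append(name)    # over budget: offload to edge
    return local, edge
```

A real scheme would jointly optimize privacy, latency, and energy (e.g., via convex relaxation or RL) rather than a single greedy pass, but the structure above matches the stated goal of performing the bulk of sensitive computation locally.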
Ikhlasse Badidi
School of Science and Engineering, Al Akhawayn University in Ifrane, Morocco
Nouhaila El Khiyaoui
School of Science and Engineering, Al Akhawayn University in Ifrane, Morocco
Aya Riany
School of Science and Engineering, Al Akhawayn University in Ifrane, Morocco
Badr Ben Elallid
Department of Electrical and Computer Engineering, Université du Québec à Trois-Rivières, Trois-Rivières, QC, Canada
Amine Abouaomar
Assistant Professor, Al Akhawayn University in Ifrane, Morocco
B5G/6G · Next-Generation Internet · Federated Learning · Multi-Agent Reinforcement Learning