🤖 AI Summary
To address user data privacy leakage risks arising from LLM computation offloading in 6G vehicular networks, this paper proposes a lightweight privacy-preserving framework integrating federated learning (FL) and differential privacy (DP). The method introduces a privacy-aware task partitioning algorithm and a secure aggregation communication protocol, enabling efficient local–edge collaborative training under stringent onboard resource constraints. Under a strict DP budget of ε = 0.8, the global model achieves 75% convergence accuracy—only 2–3 percentage points lower than centralized training—while maintaining stable per-round communication overhead at 2.1 MB and over 90% of computation performed locally. Experimental results demonstrate a favorable trade-off among privacy guarantees, model accuracy, and system efficiency. The framework provides a deployable, privacy-enhanced paradigm for AI-enabled 6G vehicular systems.
📝 Abstract
The integration of Large Language Models (LLMs) in 6G vehicular networks promises unprecedented advancements in intelligent transportation systems. However, offloading LLM computations from vehicles to edge infrastructure poses significant privacy risks, potentially exposing sensitive user data. This paper presents a novel privacy-preserving offloading framework for LLM-integrated vehicular networks. We introduce a hybrid approach combining federated learning (FL) and differential privacy (DP) techniques to protect user data while maintaining LLM performance. Our framework includes a privacy-aware task partitioning algorithm that optimizes the trade-off between local and edge computation, considering both privacy constraints and system efficiency. We also propose a secure communication protocol for transmitting model updates and aggregating results across the network. Experimental results demonstrate that our approach achieves 75% global accuracy, only 2–3 percentage points below non-privacy-preserving methods, while maintaining DP guarantees with an optimal privacy budget of $\varepsilon = 0.8$. The framework shows stable communication overhead of approximately 2.1 MB per round, with computation comprising over 90% of total processing time, validating its efficiency for resource-constrained vehicular environments.
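The FL + DP pipeline the abstract describes can be illustrated with a generic sketch: each vehicle clips its local model update to bound sensitivity, adds Gaussian noise (the standard Gaussian mechanism for DP), and the server averages the noised updates FedAvg-style. This is a minimal illustration of the general technique, not the paper's actual algorithm; the function names, the noise multiplier, and the unweighted averaging are all assumptions.

```python
import numpy as np

def dp_local_update(weights, grad, clip_norm=1.0, noise_mult=1.1, lr=0.1, rng=None):
    """One differentially private local step (illustrative, not the paper's method):
    clip the update to bound its L2 sensitivity, then add Gaussian noise."""
    rng = rng or np.random.default_rng()
    update = -lr * grad
    norm = np.linalg.norm(update)
    update = update * min(1.0, clip_norm / norm)  # L2 clipping caps sensitivity at clip_norm
    # Gaussian mechanism: noise scale proportional to sensitivity; noise_mult is
    # set from the privacy budget (e.g. epsilon = 0.8 in the paper) by an accountant.
    update = update + rng.normal(0.0, noise_mult * clip_norm, size=update.shape)
    return weights + update

def fedavg(client_weights):
    """Server-side aggregation: unweighted average of client models (FedAvg)."""
    return np.mean(np.stack(client_weights), axis=0)

# Toy round: three vehicles train locally under DP, the edge server aggregates.
rng = np.random.default_rng(0)
global_w = np.zeros(4)
clients = [dp_local_update(global_w, rng.standard_normal(4), rng=rng) for _ in range(3)]
global_w = fedavg(clients)
```

In a real deployment the noise multiplier would be derived from the target $(\varepsilon, \delta)$ budget via a privacy accountant, and aggregation would run inside the secure communication protocol so the server never sees individual plaintext updates.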