Federated Distillation Assisted Vehicle Edge Caching Scheme Based on Lightweight DDPM

📅 2025-12-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address three key challenges in vehicular edge caching—privacy preservation, high communication overhead, and training interruption caused by frequent disconnections due to high vehicle mobility—this paper proposes a collaborative caching framework integrating a lightweight denoising diffusion probabilistic model (LDPM) with federated knowledge distillation. The framework eliminates raw data transmission by generating privacy-preserving synthetic features locally on each client, thereby significantly reducing model upload frequency and communication load. Concurrently, knowledge distillation enhances global model robustness and mitigates training failures induced by intermittent vehicle connectivity. Experimental results demonstrate that the proposed approach achieves 42% reduction in communication overhead and an 18.7% improvement in cache hit ratio, while satisfying differential privacy guarantees. Moreover, it exhibits strong adaptability to dynamic vehicle speed variations.

Technology Category

Application Category

📝 Abstract
Vehicle edge caching is a promising technology that can significantly reduce the latency for vehicle users (VUs) to access content by pre-caching user-interested content at edge nodes. It is crucial to accurately predict the content that VUs are interested in without exposing their privacy. Traditional federated learning (FL) can protect user privacy by sharing models rather than raw data. However, the training of FL requires frequent model transmission, which can result in significant communication overhead. Additionally, vehicles may leave the road side unit (RSU) coverage area before training is completed, leading to training failures. To address these issues, in this letter, we propose a federated distillation-assisted vehicle edge caching scheme based on lightweight denoising diffusion probabilistic model (LDPM). The simulation results demonstrate that the proposed vehicle edge caching scheme has good robustness to variations in vehicle speed, significantly reducing communication overhead and improving cache hit percentage.
Problem

Research questions and friction points this paper is trying to address.

Predicting vehicle user content interests without privacy exposure
Reducing communication overhead in federated learning for edge caching
Preventing training failures due to vehicle mobility in caching schemes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight DDPM for content prediction
Federated distillation reduces communication overhead
Robust caching scheme for varying vehicle speeds
🔎 Similar Papers
No similar papers found.
X
Xun Li
School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China, and also with the School of Information Engineering, Jiangxi Provincial Key Laboratory of Advanced Signal Processing and Intelligent Communications, Nanchang University, Nanchang 330031, China
Q
Qiong Wu
School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China, and also with the School of Information Engineering, Jiangxi Provincial Key Laboratory of Advanced Signal Processing and Intelligent Communications, Nanchang University, Nanchang 330031, China
Pingyi Fan
Pingyi Fan
Professor of Electronic Engineering, Tsinghua University
Wireless CommunicationsInformation TheoryComputer Science
Kezhi Wang
Kezhi Wang
Professor, Royal Society Industry Fellow, Brunel University London
Wireless CommunicationEdge ComputingMachine Learning
W
Wen Chen
Department of Electronic Engineering, Shanghai JiaoTong University, Shanghai 200240, China
Khaled B. Letaief
Khaled B. Letaief
Member of US National Academy of Engineering and New Bright Professor of Engineering, HKUST
Wirelesscommunications