🤖 AI Summary
This work addresses the issue of semantic staleness in cloud-device collaborative recommendation systems, where cached semantic user representations become outdated as they are reused without updates. To tackle this, the authors propose a lightweight on-device semantic calibration mechanism that dynamically assesses the reliability of cached representations and refines them using the latest user interaction data, maintaining high-quality sequential recommendations without requiring real-time invocation of large language models in the cloud. As the first approach to mitigate semantic staleness within a cloud-device collaborative architecture, the proposed method simultaneously ensures privacy preservation, low-latency response, and strong recommendation performance. Extensive experiments on multiple real-world datasets demonstrate its significant superiority over existing baselines, notably improving ranking quality.
📝 Abstract
Cloud-device collaborative recommendation partitions computation across the cloud and user devices: the cloud provides semantic user modeling, while the device leverages recent interactions and cloud semantic signals for privacy-preserving, responsive reranking. With large language models (LLMs) on the cloud, semantic user representations can improve sequential recommendation by capturing high-level intent. However, regenerating such representations via cloud LLM inference for every request is often infeasible at real-world scale. As a result, on-device reranking commonly reuses a cached cloud semantic user embedding across requests. We empirically identify a cloud semantic staleness effect: reused embeddings become less aligned with the user's latest interactions, leading to measurable ranking degradation. Most existing LLM-enabled cloud-device recommenders are designed around on-demand cloud semantics, either assuming low-latency cloud LLM access or regenerating semantic embeddings per request. When per-request regeneration is infeasible and cached semantics must be reused, two technical challenges arise: (1) deciding when cached cloud semantics remain useful for on-device reranking, and (2) maintaining ranking quality when the cloud LLM cannot be invoked and only cached semantics are available. To address this gap, we introduce Semantic Calibration for LLM-enabled Cloud-Device Recommendation (SCaLRec). First, it estimates the reliability of cached semantics under the user's latest interactions. Second, an on-device semantic calibration module adjusts the cached semantic embedding using up-to-date interaction evidence, without per-request cloud LLM involvement. Experiments on real-world datasets show that SCaLRec consistently improves recommendation performance over strong baselines under cloud semantic staleness.
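To make the two-step idea concrete, here is a minimal illustrative sketch, not the paper's actual method: it assumes embeddings are plain vectors, estimates cache reliability as cosine alignment between the cached semantic embedding and a mean of recent interaction embeddings, and calibrates by blending the two in proportion to that score. The function names (`reliability`, `calibrate`) and the blending rule are hypothetical choices for illustration only.

```python
import numpy as np

def reliability(cached_emb: np.ndarray, recent_item_embs: np.ndarray) -> float:
    """Hypothetical reliability score in [0, 1]: cosine similarity between
    the cached cloud semantic embedding and the mean embedding of the
    user's most recent on-device interactions, mapped from [-1, 1]."""
    recent = recent_item_embs.mean(axis=0)
    cos = cached_emb @ recent / (
        np.linalg.norm(cached_emb) * np.linalg.norm(recent) + 1e-8
    )
    return float((cos + 1.0) / 2.0)

def calibrate(cached_emb: np.ndarray, recent_item_embs: np.ndarray) -> np.ndarray:
    """Illustrative on-device calibration: trust the cached embedding in
    proportion to its estimated reliability, and pull it toward the
    recent-interaction summary as it goes stale. No cloud LLM call needed."""
    r = reliability(cached_emb, recent_item_embs)
    recent = recent_item_embs.mean(axis=0)
    return r * cached_emb + (1.0 - r) * recent
```

When recent interactions still align with the cache, the score is high and the embedding passes through nearly unchanged; as they drift apart, the calibrated embedding shifts toward the fresh interaction evidence, which captures the intuition behind mitigating staleness without per-request LLM inference.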