CoSteer: Collaborative Decoding-Time Personalization via Local Delta Steering

📅 2025-07-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of personalized language model generation on resource-constrained edge devices—where cloud-based large models lack access to local user data while on-device small models suffer from limited generation quality—this paper proposes a decentralized collaborative generation framework. The core innovation is a local delta steering mechanism: during decoding, lightweight steering signals derived from logits differences of the on-device small model dynamically guide and refine the cloud large model’s output, without requiring cloud model fine-tuning. This transforms personalized modeling into an online, device-side optimization problem, enabling low-overhead, privacy-preserving real-time collaboration. Experiments across multiple personalized text generation tasks demonstrate significant improvements in relevance and generation quality, while maintaining high computational efficiency and strict data locality (i.e., no raw user data leaves the device).
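The steering rule summarized above can be sketched in a few lines. This is a minimal toy illustration, not the paper's exact formulation: the 4-token vocabulary, the fixed logit values, and the `alpha` scaling knob are all illustrative assumptions (the paper frames the delta as an online, device-side optimization rather than a fixed scalar).

```python
def steer(cloud_logits, local_aware, local_agnostic, alpha=1.0):
    """Add the local delta (context-aware minus context-agnostic
    small-model logits) to the cloud model's logits. `alpha` is a
    hypothetical scaling knob for this sketch."""
    delta = [a - b for a, b in zip(local_aware, local_agnostic)]
    return [c + alpha * d for c, d in zip(cloud_logits, delta)]

# Toy 4-token vocabulary: the cloud model alone would pick token 0,
# but the small model's personal context strongly raises token 2.
cloud = [2.0, 1.0, 0.5, 0.1]
aware = [0.2, 0.1, 2.0, 0.0]       # small model WITH user context
agnostic = [0.2, 0.1, 0.2, 0.0]    # small model WITHOUT user context

steered = steer(cloud, aware, agnostic, alpha=1.0)
best = max(range(len(steered)), key=lambda i: steered[i])
```

Where the two local runs agree (tokens 0, 1, 3), the delta is zero and the cloud logits pass through unchanged; only where the personal context matters does the output shift.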

📝 Abstract
Personalized text generation has become crucial for adapting language models to diverse and evolving users' personal context across cultural, temporal, and contextual dimensions. While existing methods often rely on centralized fine-tuning or static preference alignment, they struggle to achieve real-time adaptation under resource constraints inherent to personal devices. This limitation creates a dilemma: large cloud-based models lack access to localized user-specific information, while small on-device models cannot match the generation quality of their cloud counterparts. To address this dichotomy, we present CoSteer, a novel collaborative framework that enables decoding-time personalization through localized delta steering. Our key insight lies in leveraging the logits difference between personal context-aware and -agnostic outputs from local small models as steering signals for cloud-based LLMs. Specifically, we formulate token-level optimization as an online learning problem, where local delta vectors dynamically adjust the remote LLM's logits within the on-device environment. This approach preserves privacy by transmitting only the final steered tokens rather than raw data or intermediate vectors, while maintaining cloud-based LLMs' general capabilities without fine-tuning. Through comprehensive experiments on various personalized generation tasks, we demonstrate that CoSteer effectively assists LLMs in generating personalized content by leveraging locally stored user profiles and histories, ensuring privacy preservation through on-device data processing while maintaining acceptable computational overhead.
Problem

Research questions and friction points this paper is trying to address.

Achieve real-time personalized text generation on resource-limited devices
Bridge the quality gap between cloud and on-device language models
Enable privacy-preserving personalization without cloud fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Collaborative framework for decoding-time personalization
Local delta vectors adjust cloud LLM logits
Privacy preserved by transmitting only steered tokens
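The collaboration loop behind these points can be sketched as a toy simulation. Everything here is an assumption for illustration: the tiny vocabulary, the stand-in `cloud_logits` function, and the flat 0.5 profile boost are hypothetical, and a real deployment would exchange logits and tokens over the network rather than via function calls.

```python
VOCAB = ["the", "user", "likes", "jazz", "rock", "."]

def cloud_logits(prefix):
    # Stand-in for the remote LLM: without personalization it
    # slightly prefers "rock" as the next token.
    base = {"rock": 0.6, "jazz": 0.4}
    return [base.get(tok, 0.1) for tok in VOCAB]

def local_delta(prefix, profile):
    # Stand-in for the on-device small model's steering signal:
    # boost tokens that appear in the locally stored user profile.
    return [0.5 if tok in profile else 0.0 for tok in VOCAB]

def collaborative_decode(prompt, profile, steps=1):
    """Each step: receive cloud logits, steer them on-device, and
    send back only the chosen token, so the raw profile and history
    never leave the device."""
    prefix = list(prompt)
    for _ in range(steps):
        logits = cloud_logits(prefix)          # arrives from the cloud
        delta = local_delta(prefix, profile)   # computed on-device
        steered = [l + d for l, d in zip(logits, delta)]
        tok = VOCAB[max(range(len(steered)), key=steered.__getitem__)]
        prefix.append(tok)                     # only the token is shared
    return prefix

personalized = collaborative_decode(["the", "user", "likes"], {"jazz"})
generic = collaborative_decode(["the", "user", "likes"], set())
```

With the profile present the steered choice flips from the cloud's default "rock" to "jazz", while an empty profile leaves the cloud's preference untouched.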
Hang Lv
University of Science and Technology of China
Sheng Liang
CIS LMU Munich & Munich Center for Machine Learning
NLP
Hao Wang
University of Science and Technology of China
Hongchao Gu
University of Science and Technology of China
Yaxiong Wu
Huawei (SG) | University of Glasgow | BUAA
Information Retrieval, RecSys, Agentic AI, RL
Wei Guo
Huawei Noah's Ark Lab
Defu Lian
University of Science and Technology of China
Yong Liu
Huawei Noah's Ark Lab
Enhong Chen
University of Science and Technology of China
data mining, recommender systems, machine learning