🤖 AI Summary
To address the challenge of personalized language model generation on resource-constrained edge devices—where cloud-based large models lack access to local user data while on-device small models suffer from limited generation quality—this paper proposes a decentralized collaborative generation framework. The core innovation is a local delta steering mechanism: during decoding, lightweight steering signals derived from logits differences of the on-device small model dynamically guide and refine the cloud large model’s output, without requiring cloud model fine-tuning. This transforms personalized modeling into an online, device-side optimization problem, enabling low-overhead, privacy-preserving real-time collaboration. Experiments across multiple personalized text generation tasks demonstrate significant improvements in relevance and generation quality, while maintaining high computational efficiency and strict data locality (i.e., no raw user data leaves the device).
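The delta-steering idea in the summary can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the toy 4-token vocabulary, and the scalar `alpha` weight are all assumptions introduced here, and real decoding would repeat this per step over a full vocabulary.

```python
import numpy as np

def steer_logits(cloud_logits, small_aware_logits, small_agnostic_logits, alpha=1.0):
    """Apply a local delta to the cloud model's logits for one decoding step.

    The delta (context-aware minus context-agnostic small-model logits)
    captures how the personal context shifts the token distribution;
    adding it to the cloud logits steers the next-token choice on-device.
    """
    delta = small_aware_logits - small_agnostic_logits
    return cloud_logits + alpha * delta

# Toy 4-token vocabulary to illustrate a single decoding step.
cloud = np.array([2.0, 1.5, 0.5, 0.0])     # cloud model alone prefers token 0
aware = np.array([0.5, 2.5, 0.0, 0.0])     # small model with user context
agnostic = np.array([0.5, 0.5, 0.0, 0.0])  # small model without context

steered = steer_logits(cloud, aware, agnostic, alpha=1.0)
next_token = int(np.argmax(steered))       # personal context flips the choice to token 1
```

Only `next_token` would leave the device; the raw context and the intermediate logit vectors stay local, which is the privacy property the summary describes.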
📝 Abstract
Personalized text generation has become crucial for adapting language models to users' diverse and evolving personal contexts across cultural, temporal, and situational dimensions. While existing methods often rely on centralized fine-tuning or static preference alignment, they struggle to achieve real-time adaptation under the resource constraints inherent to personal devices. This limitation creates a dilemma: large cloud-based models lack access to localized user-specific information, while small on-device models cannot match the generation quality of their cloud counterparts. To address this dichotomy, we present CoSteer, a novel collaborative framework that enables decoding-time personalization through localized delta steering. Our key insight lies in leveraging the logits difference between personal context-aware and context-agnostic outputs from local small models as steering signals for cloud-based LLMs. Specifically, we formulate token-level optimization as an online learning problem, where local delta vectors dynamically adjust the remote LLM's logits within the on-device environment. This approach preserves privacy by transmitting only the final steered tokens rather than raw data or intermediate vectors, while maintaining cloud-based LLMs' general capabilities without fine-tuning. Through comprehensive experiments on various personalized generation tasks, we demonstrate that CoSteer effectively assists LLMs in generating personalized content by leveraging locally stored user profiles and histories, ensuring privacy preservation through on-device data processing while maintaining acceptable computational overhead.