Latent Inter-User Difference Modeling for LLM Personalization

📅 2025-07-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing personalized LLM approaches rely on explicit language prompts, limiting their ability to capture deep inter-user differences. This work proposes a latent-space user modeling framework: first, user embeddings are constructed via contrastive learning to explicitly model behavioral relativity among users; second, difference-aware soft prompts are generated, and task-relevant features are compressed and filtered using a sparse autoencoder; finally, a lightweight personalized module is injected into a frozen LLM. Crucially, this is the first method to model user heterogeneity in the latent space—rather than at the prompt level—thereby avoiding prompt-engineering biases and enhancing representation generalizability. Evaluated on personalized review generation, the method achieves state-of-the-art performance across BLEU, ROUGE, and human evaluation metrics, demonstrating both effectiveness and robustness.
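The first step the summary describes, contrasting a user's embedding with those of peers who engaged with similar content, can be illustrated with a minimal NumPy sketch. All names, dimensions, and the InfoNCE-style loss below are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                      # embedding dimension (illustrative)

# Hypothetical embeddings: one target user and peers who engaged
# with similar content (shapes and values are assumed).
user = rng.normal(size=d)
peers = rng.normal(size=(5, d))

def l2norm(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Difference-aware signal: contrast the user against the peer average.
diff_embedding = l2norm(user - peers.mean(axis=0))

# InfoNCE-style contrastive loss: a noisy "view" of the user's own
# history is the positive, peer embeddings act as negatives.
positive = user + 0.1 * rng.normal(size=d)
tau = 0.1                   # temperature (assumed)
logits = np.concatenate([
    [l2norm(user) @ l2norm(positive)],     # positive pair score
    l2norm(peers) @ l2norm(user),          # negative pair scores
]) / tau
loss = -logits[0] + np.log(np.exp(logits).sum())  # -log softmax(positive)
print(float(loss))
```

In this toy setup the difference vector captures how the user deviates from behaviorally similar peers, which is the relative signal the soft prompts are built from.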

📝 Abstract
Large language models (LLMs) are increasingly integrated into users' daily lives, leading to a growing demand for personalized outputs. Previous work focuses on leveraging a user's own history, overlooking inter-user differences that are crucial for effective personalization. While recent work has attempted to model such differences, the reliance on language-based prompts often hampers the effective extraction of meaningful distinctions. To address these issues, we propose Difference-aware Embedding-based Personalization (DEP), a framework that models inter-user differences in the latent space instead of relying on language prompts. DEP constructs soft prompts by contrasting a user's embedding with those of peers who engaged with similar content, highlighting relative behavioral signals. A sparse autoencoder then filters and compresses both user-specific and difference-aware embeddings, preserving only task-relevant features before injecting them into a frozen LLM. Experiments on personalized review generation show that DEP consistently outperforms baseline methods across multiple metrics. Our code is available at https://github.com/SnowCharmQ/DEP.
Problem

Research questions and friction points this paper is trying to address.

Modeling inter-user differences for LLM personalization
Overcoming reliance on language prompts for user distinction
Enhancing personalized outputs via latent space embeddings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent space modeling for user differences
Contrastive embeddings highlight behavioral signals
Sparse autoencoder filters task-relevant features
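The sparse-autoencoder filtering step above can be sketched with a top-k variant in NumPy. The weights here are random and untrained, and the top-k activation rule is an assumption standing in for whatever sparsity mechanism the paper uses:

```python
import numpy as np

rng = np.random.default_rng(1)
d, h, k = 16, 64, 8   # input dim, dictionary size, active features (assumed)

# Random (untrained) encoder/decoder weights, for illustration only.
W_enc = rng.normal(scale=0.1, size=(d, h))
W_dec = rng.normal(scale=0.1, size=(h, d))

def sparse_autoencode(x):
    """Encode, keep only the top-k activations, then decode."""
    acts = np.maximum(x @ W_enc, 0.0)     # ReLU pre-activations
    idx = np.argsort(acts)[:-k]           # indices of all but the k largest
    codes = acts.copy()
    codes[idx] = 0.0                      # the "filtering" step
    return codes, codes @ W_dec          # sparse codes, reconstruction

embedding = rng.normal(size=d)            # a user or difference embedding
codes, recon = sparse_autoencode(embedding)
print(int((codes > 0).sum()))             # at most k active features
```

Only the surviving (task-relevant) features would then be injected into the frozen LLM as a compact soft prompt.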
Yilun Qiu
National University of Singapore
Tianhao Shi
University of Science and Technology of China
Xiaoyan Zhao
The Chinese University of Hong Kong
Fengbin Zhu
National University of Singapore
NLP · IR · LLM · Document AI · AI + Finance
Yang Zhang
National University of Singapore
Fuli Feng
University of Science and Technology of China