PRIME: Large Language Model Personalization with Cognitive Memory and Thought Processes

📅 2025-07-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
LLM personalization faces challenges due to the absence of a unified theoretical framework and difficulty in modeling dynamic user preference evolution. To address this, we propose PRIME—a novel framework that, for the first time, integrates dual memory mechanisms from cognitive science—episodic and semantic memory—into LLM personalization. PRIME couples these with a personalized reasoning module inspired by “slow thinking,” enabling joint modeling of short-term interactions and long-term belief evolution. The method incorporates historical interaction modeling, dynamic belief updating, and a long-context evaluation benchmark constructed from Reddit’s Change My View (CMV) dataset. Experiments demonstrate that PRIME significantly outperforms existing approaches on our CMV-based long-context benchmark, effectively mitigates popularity bias, and accurately captures fine-grained individual preference dynamics. This work establishes a new paradigm for interpretable and sustainable LLM personalization.

📝 Abstract
Large language model (LLM) personalization aims to align model outputs with individuals' unique preferences and opinions. While recent efforts have implemented various personalization methods, a unified theoretical framework that can systematically understand the drivers of effective personalization is still lacking. In this work, we integrate the well-established cognitive dual-memory model into LLM personalization, by mirroring episodic memory to historical user engagements and semantic memory to long-term, evolving user beliefs. Specifically, we systematically investigate memory instantiations and introduce a unified framework, PRIME, using episodic and semantic memory mechanisms. We further augment PRIME with a novel personalized thinking capability inspired by the slow thinking strategy. Moreover, recognizing the absence of suitable benchmarks, we introduce a dataset using Change My View (CMV) from Reddit, specifically designed to evaluate long-context personalization. Extensive experiments validate PRIME's effectiveness across both long- and short-context scenarios. Further analysis confirms that PRIME effectively captures dynamic personalization beyond mere popularity biases.
Problem

Research questions and friction points this paper is trying to address.

Lack of unified framework for LLM personalization drivers
Need to integrate cognitive memory models into LLM personalization
Absence of suitable benchmarks for long-context personalization evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates cognitive dual-memory model into LLM personalization
Augments with personalized slow thinking strategy
Introduces CMV dataset for long-context evaluation
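To make the dual-memory idea concrete, here is a minimal illustrative sketch of an episodic/semantic memory store. All class and method names, the rolling-window episodic buffer, and the exponential-moving-average belief update are assumptions for illustration only, not PRIME's actual design.

```python
from collections import deque

class DualMemory:
    """Illustrative dual-memory store: episodic = recent interactions,
    semantic = slowly evolving long-term beliefs. Names and the update
    rule are assumptions, not the paper's implementation."""

    def __init__(self, episodic_capacity=5, lr=0.3):
        # Episodic memory: a rolling window of recent user interactions.
        self.episodic = deque(maxlen=episodic_capacity)
        # Semantic memory: long-term belief score per topic.
        self.semantic = {}
        self.lr = lr  # how fast long-term beliefs drift toward new evidence

    def observe(self, topic, stance, text):
        """Record one interaction and fold it into long-term beliefs."""
        self.episodic.append((topic, stance, text))
        prior = self.semantic.get(topic, 0.0)
        # Exponential moving average: recent evidence nudges stable beliefs.
        self.semantic[topic] = (1 - self.lr) * prior + self.lr * stance

    def context_for_prompt(self, topic):
        """Assemble personalization context for an LLM prompt:
        recent on-topic episodes plus the long-term belief estimate."""
        recent = [e for e in self.episodic if e[0] == topic]
        belief = self.semantic.get(topic, 0.0)
        return {"recent_interactions": recent, "long_term_belief": belief}

mem = DualMemory()
mem.observe("climate", 0.8, "Supports a carbon tax")
mem.observe("climate", 0.6, "Skeptical of offsets")
ctx = mem.context_for_prompt("climate")
```

The key design point the sketch captures is the differing time scales: the episodic buffer forgets quickly (fixed capacity), while the semantic store changes only gradually, mirroring short-term engagement versus long-term belief evolution.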
Xinliang Frederick Zhang
PhD Candidate, University of Michigan
Natural Language Processing · Machine Learning · Computational Linguistics · Computational Social
Nick Beauchamp
Department of Political Science, Northeastern University, Boston, MA
Lu Wang
Computer Science and Engineering, University of Michigan, Ann Arbor, MI