🤖 AI Summary
To address the challenge of personalizing large language models (LLMs) under sparse user interaction data, this paper proposes a progressive personalization framework. Inspired by meso-level theory in sociology, it first performs meso-scale clustering of users by preference to construct group-level representations, then dynamically couples group-level and user-level low-rank adaptation (LoRA) modules through a dual-router mechanism consisting of a user-aware router and a LoRA-aware router. The method adopts a Mixture-of-Experts (MoE) architecture that jointly learns user grouping and hierarchical LoRA adaptation. Evaluated across multiple tasks, the approach significantly outperforms state-of-the-art methods in personalization accuracy, parameter efficiency, and generalization, even with extremely limited interaction data.
📝 Abstract
Personalized large language models (LLMs) aim to tailor their outputs to user preferences. Recent advances in parameter-efficient fine-tuning (PEFT) methods have highlighted the effectiveness of adapting population-level LLMs to personalized LLMs by fine-tuning user-specific parameters with user history. However, user data is typically sparse, making it challenging to adapt LLMs to specific user patterns. To address this challenge, we propose PROgressive PERsonalization (PROPER), a novel progressive learning framework inspired by meso-level theory in social science. PROPER bridges population-level and user-level models by grouping users based on preferences and adapting LLMs in stages. It combines a Mixture-of-Experts (MoE) structure with Low-Rank Adaptation (LoRA), using a user-aware router to assign users to appropriate groups automatically. Additionally, a LoRA-aware router is proposed to facilitate the integration of individual user LoRAs with group-level LoRAs. Experimental results show that PROPER significantly outperforms state-of-the-art (SOTA) models across multiple tasks, demonstrating the effectiveness of our approach.
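The abstract describes a dual-router design: a user-aware router softly assigns each user to group-level LoRA experts, and a LoRA-aware router mixes the resulting group-level adaptation with the user's own LoRA. A minimal NumPy sketch of one adapted linear layer under this scheme is below; all shapes, router parametrizations (simple linear-plus-softmax gates over a user embedding), and initializations are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, n_groups = 16, 4, 3  # hidden dim, LoRA rank, number of group experts (assumed)

# Frozen base weight plus group-level and user-level LoRA factors (A: d->r, B: r->d).
W0 = rng.standard_normal((d, d)) * 0.02
group_A = rng.standard_normal((n_groups, d, r)) * 0.02
group_B = np.zeros((n_groups, r, d))  # B initialized to zero, as in standard LoRA
user_A = rng.standard_normal((d, r)) * 0.02
user_B = np.zeros((r, d))

# Hypothetical router weights, both conditioned on a user embedding.
Wg = rng.standard_normal((d, n_groups)) * 0.02  # user-aware router: user -> groups
Wl = rng.standard_normal((d, 2)) * 0.02         # LoRA-aware router: group vs. user

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def proper_forward(x, user_emb):
    """Base output plus routed group-level and user-level LoRA deltas."""
    base = x @ W0
    # User-aware router: soft assignment of this user over group experts.
    g = softmax(user_emb @ Wg)  # shape (n_groups,)
    group_delta = sum(g[i] * (x @ group_A[i] @ group_B[i]) for i in range(n_groups))
    # User-level LoRA delta.
    user_delta = x @ user_A @ user_B
    # LoRA-aware router: mixes group-level and user-level adaptations.
    w = softmax(user_emb @ Wl)  # shape (2,)
    return base + w[0] * group_delta + w[1] * user_delta

x = rng.standard_normal((1, d))
user_emb = rng.standard_normal(d)
y = proper_forward(x, user_emb)
print(y.shape)  # (1, 16)
```

With the zero-initialized B factors, both deltas start at zero, so the adapted layer initially reproduces the frozen base model; training would update the LoRA factors and routers jointly, matching the staged population-to-group-to-user adaptation the summary describes.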