PROPER: A Progressive Learning Framework for Personalized Large Language Models with Group-Level Adaptation

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of personalizing large language models (LLMs) under sparse user interaction data, this paper proposes a progressive personalization framework. First, it performs meso-scale clustering based on user preferences to construct group-level representations. Then, it dynamically couples group-level and user-level low-rank adaptation (LoRA) modules via a dual-router mechanism—comprising a user-aware router and a LoRA-aware router. Inspired by sociological meso-theory, the method adopts a Mixture-of-Experts (MoE) architecture that jointly learns user grouping and hierarchical LoRA adaptation. Evaluated across multiple tasks, the approach significantly outperforms state-of-the-art methods, achieving high personalization accuracy, parameter efficiency, and strong generalization—even with extremely limited interaction data.

📝 Abstract
Personalized large language models (LLMs) aim to tailor their outputs to user preferences. Recent advances in parameter-efficient fine-tuning (PEFT) methods have highlighted the effectiveness of adapting population-level LLMs to personalized LLMs by fine-tuning user-specific parameters with user history. However, user data is typically sparse, making it challenging to adapt LLMs to specific user patterns. To address this challenge, we propose PROgressive PERsonalization (PROPER), a novel progressive learning framework inspired by meso-level theory in social science. PROPER bridges population-level and user-level models by grouping users based on preferences and adapting LLMs in stages. It combines a Mixture-of-Experts (MoE) structure with Low-Rank Adaptation (LoRA), using a user-aware router to assign users to appropriate groups automatically. Additionally, a LoRA-aware router is proposed to facilitate the integration of individual user LoRAs with group-level LoRAs. Experimental results show that PROPER significantly outperforms SOTA models across multiple tasks, demonstrating the effectiveness of our approach.
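The dual-router idea described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the router parameterizations (a softmax gate over group-level LoRA experts from a user embedding, and a sigmoid gate blending in the user-level LoRA) are assumptions chosen to mirror the description, and all names (`forward`, `user_emb`, `W_router_user`, `W_router_lora`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_groups = 16, 4, 3  # hidden dim, LoRA rank, number of group experts

# Frozen population-level weight (stand-in for one LLM layer)
W = rng.standard_normal((d, d)) * 0.02

# Group-level LoRA experts: each expert is a low-rank pair (A_g, B_g)
group_A = rng.standard_normal((n_groups, r, d)) * 0.02
group_B = np.zeros((n_groups, d, r))  # B starts at zero, standard LoRA init

# User-level LoRA (one user shown)
user_A = rng.standard_normal((r, d)) * 0.02
user_B = np.zeros((d, r))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def forward(x, user_emb, W_router_user, W_router_lora):
    """One adapted layer: frozen base + gated group LoRAs + gated user LoRA."""
    # User-aware router: soft assignment of this user to group experts
    g = softmax(W_router_user @ user_emb)  # shape (n_groups,)
    group_delta = sum(g[i] * (group_B[i] @ (group_A[i] @ x))
                      for i in range(n_groups))
    # LoRA-aware router (simplified here to a scalar sigmoid gate):
    # controls how much the individual user LoRA adds on top of the groups
    alpha = 1.0 / (1.0 + np.exp(-(W_router_lora @ user_emb)))
    user_delta = user_B @ (user_A @ x)
    return W @ x + group_delta + alpha * user_delta
```

Because the `B` matrices are zero-initialized, the adapted layer initially reproduces the frozen base output exactly; training would then move the group experts first and the user LoRA second, matching the staged (progressive) adaptation the abstract describes.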
Problem

Research questions and friction points this paper is trying to address.

Addresses sparse user data in personalized LLM adaptation.
Proposes PROPER for progressive user-group-based LLM adaptation.
Enhances LLM personalization via MoE and LoRA integration.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive learning framework for personalized LLMs
Group-level adaptation using Mixture-of-Experts
LoRA-aware router for user-group integration