Never Start from Scratch: Expediting On-Device LLM Personalization via Explainable Model Selection

📅 2025-04-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the dual challenges of limited on-device computational resources and sparse user data in mobile LLM personalization, this paper proposes an explainability-guided pre-personalization model selection method. Our approach leverages interpretability signals—such as gradient sensitivity and feature attribution—extracted during lightweight fine-tuning to intelligently select the most suitable candidate model for personalization, thereby eliminating the need for from-scratch training. The method integrates model similarity measurement, efficient fine-tuning evaluation, and edge-aware deployment optimization. Evaluated on mainstream smartphones, it reduces on-device personalization computation cost by 83% and improves data efficiency by 51%. By grounding model selection in interpretable signals, our method significantly enhances both the practicality and transparency of personalized LLMs under resource-constrained conditions.
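The summary describes ranking candidate personalized models by interpretability signals such as gradient sensitivity, then fine-tuning the best match rather than starting from a base model. A minimal sketch of that selection idea, using a hypothetical cosine similarity over per-parameter gradient-magnitude "fingerprints" (the function names, the fingerprint definition, and the similarity metric are illustrative assumptions, not the paper's actual method):

```python
import math

def gradient_fingerprint(grads):
    """Summarize per-parameter gradient magnitudes as a unit-norm vector."""
    v = [abs(g) for g in grads]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n > 0 else v

def cosine(a, b):
    """Cosine similarity of two already-normalized vectors."""
    return sum(x * y for x, y in zip(a, b))

def select_candidate(user_grads, candidate_grads):
    """Pick the candidate model whose gradient fingerprint on a few user
    samples is most similar to the target user's fingerprint."""
    u = gradient_fingerprint(user_grads)
    scores = {name: cosine(gradient_fingerprint(g), u)
              for name, g in candidate_grads.items()}
    best = max(scores, key=scores.get)
    return best, scores

best, scores = select_candidate(
    user_grads=[0.9, 0.1, 0.4],
    candidate_grads={
        "model_A": [0.8, 0.2, 0.5],  # similar sensitivity profile
        "model_B": [0.1, 0.9, 0.1],  # dissimilar profile
    },
)
print(best)  # → model_A
```

The point of such a selection step is that measuring similarity on a handful of user samples is far cheaper than fine-tuning every candidate, which is how a method like this could avoid from-scratch personalization on-device.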

📝 Abstract
Personalization of Large Language Models (LLMs) is important in practical applications to accommodate the individual needs of different mobile users. Due to data privacy concerns, LLM personalization often needs to be done locally on the user's mobile device, but such on-device personalization is constrained by both the limited on-device compute power and the insufficiency of the user's personal data. In this paper, we address these constraints by fine-tuning an already personalized LLM with the user's personal data, and present XPerT, a new technique that ensures proper selection of such already personalized LLMs based on explainability of how they were fine-tuned. We implemented and evaluated XPerT on various smartphone models with mainstream LLMs, and experiment results show that XPerT reduces the computation costs of on-device LLM personalization by 83% and improves its data efficiency by 51%.
Problem

Research questions and friction points this paper is trying to address.

Expediting on-device LLM personalization via model selection
Reducing computation costs of personalizing LLMs on mobile devices
Improving data efficiency for on-device LLM fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning personalized LLMs with user data
Explainable model selection for efficiency
Reduces computation costs by 83%
Haoming Wang, University of Pittsburgh
Boyuan Yang, University of Pittsburgh
Xiangyu Yin, University of Pittsburgh
Wei Gao, University of Pittsburgh