🤖 AI Summary
To address the dual challenges of limited on-device computational resources and sparse user data in mobile LLM personalization, this paper proposes an explainability-guided pre-personalization model selection method. Our approach leverages interpretability signals—such as gradient sensitivity and feature attribution—extracted during lightweight fine-tuning to intelligently select the most suitable candidate model for personalization, thereby eliminating the need for from-scratch training. The method integrates model similarity measurement, efficient fine-tuning evaluation, and edge-aware deployment optimization. Evaluated on mainstream smartphones, it reduces on-device personalization computation cost by 83% and improves data efficiency by 51%. By grounding model selection in interpretable signals, our method significantly enhances both the practicality and transparency of personalized LLMs under resource-constrained conditions.
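The selection idea above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: it assumes each candidate pre-personalized model exposes a flattened "explanation" vector (e.g. per-layer gradient sensitivity recorded during its fine-tuning), and that a comparable vector can be extracted from a lightweight pass over the new user's data; the function and variable names are hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened sensitivity vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_candidate(user_signal: np.ndarray,
                     candidate_signals: dict[str, np.ndarray]) -> str:
    """Pick the candidate whose fine-tuning explanation vector best
    matches the signal extracted from the user's own data, so that
    on-device personalization starts from the closest model."""
    return max(candidate_signals,
               key=lambda name: cosine_similarity(user_signal,
                                                  candidate_signals[name]))

# Toy per-layer sensitivity profiles (hypothetical values).
user = np.array([0.9, 0.1, 0.4])
candidates = {
    "model_A": np.array([0.8, 0.2, 0.5]),  # profile close to the user's
    "model_B": np.array([0.1, 0.9, 0.0]),  # profile far from the user's
}
print(select_candidate(user, candidates))  # → model_A
```

Starting fine-tuning from the best-matching candidate, rather than a generic base model, is what lets the method cut on-device compute and data requirements.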
📝 Abstract
Personalization of Large Language Models (LLMs) is important in practical applications to accommodate the individual needs of different mobile users. Due to data privacy concerns, LLM personalization often needs to be done locally on the user's mobile device, but such on-device personalization is constrained by both limited on-device compute power and the insufficiency of the user's personal data. In this paper, we address these constraints by fine-tuning an already personalized LLM with the user's personal data, and present XPerT, a new technique that ensures proper selection of such already personalized LLMs based on explainability of how they were fine-tuned. We implemented and evaluated XPerT on various smartphone models with mainstream LLMs, and experimental results show that XPerT reduces the computation costs of on-device LLM personalization by 83% and improves its data efficiency by 51%.