Less but Better: Parameter-Efficient Fine-Tuning of Large Language Models for Personality Detection

๐Ÿ“… 2025-04-07
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Large language models (LLMs) suffer from high computational cost and unpredictable performance when fully fine-tuned for personality detection. To address this, we propose PersLLM, a parameter-efficient framework. Methodologically, PersLLM introduces (1) a novel dynamic memory layer that caches and reuses high-dimensional LLM representations, decoupling feature extraction from downstream adaptation; and (2) a modular, lightweight output network that enhances fine-tuning predictability and cross-dataset generalization. By integrating LLM representation extraction, dynamic memory mechanisms, and parameter-efficient fine-tuning (PEFT), PersLLM achieves state-of-the-art performance on benchmarks including Kaggle and Pandoraโ€”while reducing training FLOPs by approximately 68%. It further demonstrates strong robustness across diverse personality traits and data distributions. This framework provides an efficient, scalable, and reliable solution for personality modeling under resource-constrained settings.

Technology Category

Application Category

๐Ÿ“ Abstract
Personality detection automatically identifies an individual's personality from various data sources, such as social media texts. However, as the parameter scale of language models continues to grow, the computational cost becomes increasingly difficult to manage. Fine-tuning also grows more complex, making it harder to justify the effort and reliably predict outcomes. We introduce a novel parameter-efficient fine-tuning framework, PersLLM, to address these challenges. In PersLLM, a large language model (LLM) extracts high-dimensional representations from raw data and stores them in a dynamic memory layer. PersLLM then updates the downstream layers with a replaceable output network, enabling flexible adaptation to various personality detection scenarios. By storing the features in the memory layer, we eliminate the need for repeated complex computations by the LLM. Meanwhile, the lightweight output network serves as a proxy for evaluating the overall effectiveness of the framework, improving the predictability of results. Experimental results on key benchmark datasets like Kaggle and Pandora show that PersLLM significantly reduces computational cost while maintaining competitive performance and strong adaptability.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational cost in large language model fine-tuning
Improving adaptability for personality detection tasks
Maintaining performance while minimizing parameter complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter-efficient fine-tuning framework PersLLM
Dynamic memory layer stores high-dimensional representations
Lightweight replaceable output network for adaptability
๐Ÿ”Ž Similar Papers
No similar papers found.