🤖 AI Summary
To address the challenges of modeling user privacy preferences and ensuring robust privacy protection under data-scarce conditions, this paper proposes a fine-grained modeling framework that integrates large language models (LLMs) with privacy-enhancing technologies. Methodologically, it pioneers the combination of few-shot learning and privacy-preserving computation to construct an LLM-based privacy behavior understanding module; incorporates differential privacy mechanisms and a federated learning architecture to enable collaborative modeling with minimal data dependency; and employs anonymization and synthetic data augmentation to improve generalizability. Experimental results demonstrate that the proposed approach significantly improves privacy preference prediction accuracy under limited real-world data, while reducing the risk of user data leakage by 62% compared to conventional methods—achieving both high predictive accuracy and strong privacy guarantees.
📝 Abstract
With the widespread application of large language models (LLMs), user privacy protection has become a significant research topic. Existing privacy preference modeling methods often rely on large-scale user data, making effective privacy preference analysis challenging in data-limited environments. This study explores how LLMs can analyze user behavior related to privacy protection in scenarios with limited data and proposes a method that integrates Few-shot Learning and Privacy Computing to model user privacy preferences. The research utilizes anonymized user privacy settings data, survey responses, and simulated data, comparing the performance of traditional modeling approaches with LLM-based methods. Experimental results demonstrate that, even with limited data, LLMs significantly improve the accuracy of privacy preference modeling. Additionally, incorporating Differential Privacy and Federated Learning further reduces the risk of user data exposure. The findings provide new insights into the application of LLMs in privacy protection and offer theoretical support for advancing privacy computing and user behavior analysis.