Can LLMs Make (Personalized) Access Control Decisions?

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Users face an increasing cognitive burden when making access control decisions, driven by the complexity and automation of both traditional and agent-based systems. Method: This paper proposes a dynamic, context-aware, personalized access control framework that leverages large language models (LLMs), evaluated on a curated dataset of 307 natural-language privacy statements and 14,682 real-world authorization decisions collected in a user study. Contribution/Results: LLM-generated decisions agree with the majority of user choices in up to 86% of cases, and a personalized variant further improves agreement with individual users' decisions. Crucially, the work systematically characterizes a trade-off in personalization: adhering to user-specific preferences improves individual decision fidelity but can violate security best practices. By integrating contextual understanding, user preferences, and policy constraints, the framework points toward an adaptive access control paradigm that balances usability and security.

📝 Abstract
Precise access control decisions are crucial to the security of both traditional applications and emerging agent-based systems. Typically, these decisions are made by users during app installation or at runtime. Due to the increasing complexity and automation of systems, making these access control decisions can place a significant cognitive load on users, often overwhelming them and leading to suboptimal or even arbitrary access control decisions. To address this problem, we propose to leverage the processing and reasoning capabilities of large language models (LLMs) to make dynamic, context-aware decisions aligned with the user's security preferences. For this purpose, we conducted a user study, which resulted in a dataset of 307 natural-language privacy statements and 14,682 access control decisions made by users. We then compare these decisions against those made by two versions of LLMs: a general one and a personalized one; for the latter, we also gathered user feedback on 1,446 of its decisions. Our results show that, in general, LLMs can reflect users' preferences well, achieving up to 86% accuracy when compared to the decision made by the majority of users. Our study also reveals a crucial trade-off in personalizing such a system: while providing user-specific privacy preferences to the LLM generally improves agreement with individual user decisions, adhering to those preferences can also violate some security best practices. Based on our findings, we discuss design and risk considerations for implementing a practical natural-language-based access control system that balances personalization, security, and utility.
Problem

Research questions and friction points this paper is trying to address.

Reducing cognitive load in access control decisions for users
Leveraging LLMs for dynamic context-aware security decisions
Balancing personalization with security best practices in access control
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs make dynamic, context-aware access control decisions
Personalized LLMs align decisions with user security preferences
Natural-language system balances personalization, security, and utility
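The decision flow described above can be sketched in outline: a user's natural-language privacy statement and a concrete access request are combined into a prompt, and the model's reply is parsed into an allow/deny decision. This is a minimal illustrative sketch, not the paper's implementation; `query_llm` is a hypothetical placeholder for any LLM completion call.

```python
def build_prompt(privacy_statement: str, request: str) -> str:
    """Combine the user's stated preferences with the access request.

    Hypothetical prompt format, not taken from the paper.
    """
    return (
        "You are an access control assistant.\n"
        f"User privacy preferences: {privacy_statement}\n"
        f"Access request: {request}\n"
        "Answer with exactly ALLOW or DENY."
    )

def parse_decision(reply: str) -> bool:
    """Map the model's free-text reply to a boolean decision.

    Unrecognized replies default to deny (fail closed).
    """
    return reply.strip().upper().startswith("ALLOW")

def decide(privacy_statement: str, request: str, query_llm) -> bool:
    """End-to-end decision: prompt the model, then parse its answer.

    `query_llm` is any callable mapping a prompt string to a reply string.
    """
    reply = query_llm(build_prompt(privacy_statement, request))
    return parse_decision(reply)

# Example with a stubbed model that denies anything mentioning "location":
stub = lambda prompt: "DENY" if "location" in prompt else "ALLOW"
print(decide("Never share my location with third parties.",
             "App X requests background location access.", stub))  # False
```

A real deployment would also need to reconcile the parsed decision with hard policy constraints, since (as the results above show) following user preferences alone can conflict with security best practices.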
Friederike Groschupp
Department of Computer Science, ETH Zurich, Switzerland
Daniele Lain
ETH Zurich
Aritra Dhar
Computing Systems Lab, Huawei Technologies Switzerland AG
Lara Magdalena Lazier
Computing Systems Lab, Huawei Technologies Switzerland AG
Srdjan Čapkun
Department of Computer Science, ETH Zurich, Switzerland