Synthesizing Access Control Policies using Large Language Models

📅 2025-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Manually authoring access control policies is error-prone and costly, particularly in security-critical systems. Method: This work proposes a zero-shot, large language model (LLM)-based automated policy synthesis method built around a fine-grained prompt template grounded in AWS IAM policy syntax. The template accommodates two input modalities, concrete request lists and natural-language descriptions, and requires neither fine-tuning nor in-context examples. Contribution/Results: Preliminary experiments show that the structured prompting significantly improves policy correctness over generic zero-shot baselines, achieving 100% compliance with syntactic validity and basic semantic constraints (e.g., action-resource alignment and permission minimality). The approach enables scalable, low-barrier policy engineering for security-sensitive applications and points toward a new paradigm for LLM-driven access control automation.
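The summary names two semantic constraints a synthesized policy must satisfy: action-resource alignment and permission minimality. The sketch below is an illustrative checker, not the paper's actual evaluation code; the policy JSON and the `check_policy` helper are hypothetical, and it simplifies IAM semantics (no wildcards, conditions, or Deny precedence).

```python
import json

def check_policy(policy: dict, allowed_requests: set[tuple[str, str]]) -> bool:
    """Return True iff every Allow (action, resource) pair appears in the
    requested set, i.e. the policy grants nothing beyond what was asked for."""
    granted = set()
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        # IAM allows either a single string or a list for Action/Resource.
        actions = stmt["Action"]
        resources = stmt["Resource"]
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        for a in actions:
            for r in resources:
                granted.add((a, r))
    # Permission minimality: the granted set is a subset of the requested set.
    return granted <= allowed_requests

policy = json.loads("""{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow",
     "Action": "s3:GetObject",
     "Resource": "arn:aws:s3:::example-bucket/*"}
  ]
}""")

requests = {("s3:GetObject", "arn:aws:s3:::example-bucket/*")}
print(check_policy(policy, requests))  # → True
```

A real IAM validator would also have to expand action and resource wildcards and honor explicit Deny statements; this subset check only conveys the minimality idea.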

📝 Abstract
Cloud compute systems allow administrators to write access control policies that govern access to private data. While policies are written in convenient languages, such as the AWS Identity and Access Management Policy Language, manually written policies often become complex and error-prone. In this paper, we investigate whether and how well Large Language Models (LLMs) can be used to synthesize access control policies. Our investigation focuses on the task of taking an access control request specification and zero-shot prompting LLMs to synthesize a well-formed access control policy that correctly adheres to the request specification. We consider two scenarios, one in which the request specification is given as a concrete list of requests to be allowed or denied, and another in which a natural language description is used to specify sets of requests to be allowed or denied. We then argue that for zero-shot prompting, more precise and structured prompts using a syntax-based approach are necessary, and experimentally show preliminary results validating our approach.
Problem

Research questions and friction points this paper is trying to address.

Synthesizing access control policies using LLMs
Reducing complexity and errors in manual policy writing
Evaluating zero-shot prompting for policy synthesis accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs synthesize access control policies
Zero-shot prompting for policy generation
Syntax-based prompts improve policy accuracy
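The "syntax-based prompts" idea can be illustrated with a small sketch. This is not the paper's actual template; the `IAM_SYNTAX_HINT` text and `build_prompt` helper are hypothetical, and only convey the general shape of a zero-shot prompt that embeds the IAM policy grammar alongside a concrete request list.

```python
# Illustrative only: embeds a description of IAM JSON syntax plus explicit
# allow/deny request lists into a single zero-shot prompt string.

IAM_SYNTAX_HINT = (
    'Output a single JSON object with keys "Version" and "Statement". '
    'Each statement must have "Effect" ("Allow" or "Deny"), "Action", '
    'and "Resource" (an ARN or a list of ARNs).'
)

def build_prompt(allow: list[tuple[str, str]], deny: list[tuple[str, str]]) -> str:
    """Assemble a structured zero-shot prompt from concrete request lists."""
    lines = ["Synthesize an AWS IAM policy.", IAM_SYNTAX_HINT, "Allow exactly:"]
    lines += [f"- action {a} on resource {r}" for a, r in allow]
    lines.append("Deny exactly:")
    lines += [f"- action {a} on resource {r}" for a, r in deny]
    lines.append("Grant no permissions beyond those listed.")
    return "\n".join(lines)

prompt = build_prompt(
    allow=[("s3:GetObject", "arn:aws:s3:::example-bucket/*")],
    deny=[("s3:DeleteObject", "arn:aws:s3:::example-bucket/*")],
)
print(prompt)
```

Grounding the prompt in the policy language's grammar, rather than asking generically for "a policy", is what the paper argues makes zero-shot synthesis reliable enough to pass syntactic and basic semantic checks.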