Policy-Aware Generative AI for Safe, Auditable Data Access Governance

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Enterprises face significant challenges in data access governance, requiring simultaneous enforcement of least-privilege principles, regulatory compliance, and auditable traceability. This paper proposes a policy-aware generative AI controller featuring a six-stage reasoning framework that integrates the Gemini 2.0 Flash large language model with hard policy gating and a default-deny mechanism. The framework performs natural-language request parsing, user authentication, data classification, business-purpose validation, regulatory compliance mapping, and risk synthesis. Key innovations include early-stage policy interception and generation of machine-readable decision rationales, ensuring 100% recall for denied requests and zero false approvals of prohibited access. Evaluated across 14 representative use cases, the system achieves 92.9% decision accuracy, 100% functional appropriateness, and 100% compliance adherence; expert assessments confirm high rationale quality, with median response latency under 60 seconds.

📝 Abstract
Enterprises need access decisions that satisfy least privilege, comply with regulations, and remain auditable. We present a policy-aware controller that uses a large language model (LLM) to interpret natural-language requests against written policies and metadata, not raw data. The system, implemented with Google Gemini 2.0 Flash, executes a six-stage reasoning framework (context interpretation, user validation, data classification, business-purpose test, compliance mapping, and risk synthesis) with early hard policy gates and deny-by-default behavior. It returns APPROVE, DENY, or CONDITIONAL, together with cited controls and a machine-readable rationale. We evaluate on fourteen canonical cases across seven scenario families using a privacy-preserving benchmark. Results show Exact Decision Match improving from 10/14 to 13/14 (92.9%) after applying policy gates, DENY recall rising to 1.00, the False Approval Rate on must-deny families dropping to 0, and Functional Appropriateness and Compliance Adherence at 14/14. Expert ratings of rationale quality are high, and median latency is under one minute. These findings indicate that policy-constrained LLM reasoning, combined with explicit gates and audit trails, can translate human-readable policies into safe, compliant, and traceable machine decisions.
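A minimal sketch of the deny-by-default flow the abstract describes: hard policy gates run first and short-circuit to DENY, and only requests that clear them proceed through the six reasoning stages. All function, type, and field names below are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    outcome: str                      # "APPROVE", "DENY", or "CONDITIONAL"
    cited_controls: list = field(default_factory=list)
    rationale: str = ""

def evaluate(request, stages, hard_gates):
    """Deny-by-default: any hard gate hit denies before LLM-style reasoning."""
    for gate in hard_gates:
        violation = gate(request)     # e.g. prohibited data class, missing purpose
        if violation:
            return Decision("DENY", [violation], f"Hard gate triggered: {violation}")
    # Stages stand in for context interpretation, user validation, data
    # classification, business-purpose test, compliance mapping, risk synthesis.
    findings = [stage(request) for stage in stages]
    controls = [f["control"] for f in findings if "control" in f]
    if any(f.get("risk") == "high" for f in findings):
        return Decision("CONDITIONAL", controls,
                        "High residual risk; approval requires conditions")
    return Decision("APPROVE", controls, "All stages passed")
```

A request touching a prohibited data class is denied at the gate with the violated control cited, matching the paper's goal of zero false approvals on must-deny cases.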
Problem

Research questions and friction points this paper is trying to address.

Governs data access decisions using policy-aware AI
Interprets natural language requests against written policies
Ensures safe auditable compliance through constrained reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM interprets natural language requests against policies
Six-stage reasoning framework with early policy gates
Returns auditable decisions with cited controls and rationale
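To make the auditability point concrete, a decision like the one above might serialize to a machine-readable audit-trail entry roughly like this; the field names and the cited controls are hypothetical assumptions, not the paper's actual schema.

```python
import json

# Hypothetical audit-trail entry; schema is an illustrative assumption.
decision_record = {
    "request_id": "req-001",
    "decision": "DENY",
    "stage_reached": "data_classification",   # early hard gate fired here
    "cited_controls": ["GDPR Art. 9", "Internal-Policy-DC-3"],
    "rationale": "Special-category personal data requested without a "
                 "documented lawful basis; deny-by-default applies.",
}
entry = json.dumps(decision_record, indent=2)
print(entry)
```

A record in this shape gives auditors both the outcome and the specific controls that drove it, which is what distinguishes the approach from an unconstrained LLM verdict.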
Shames Al Mandalawi
Center for Advanced Research in Entity Resolution and Information Quality (ERIQ), The University of Arkansas at Little Rock, Little Rock, AR, USA
Muzakkiruddin Ahmed Mohammed
Center for Advanced Research in Entity Resolution and Information Quality (ERIQ), The University of Arkansas at Little Rock, Little Rock, AR, USA
Hendrika Maclean
Center for Advanced Research in Entity Resolution and Information Quality (ERIQ), The University of Arkansas at Little Rock, Little Rock, AR, USA
Mert Can Cakmak
University of Arkansas at Little Rock
John R. Talburt
Center for Advanced Research in Entity Resolution and Information Quality (ERIQ), The University of Arkansas at Little Rock, Little Rock, AR, USA