🤖 AI Summary
Enterprises face significant challenges in data access governance, requiring simultaneous enforcement of least-privilege principles, regulatory compliance, and auditable traceability. This paper proposes a policy-aware generative AI controller featuring a six-stage reasoning framework that integrates the Gemini 2.0 Flash large language model with hard policy gating and a default-deny mechanism. The framework performs natural-language request parsing, user authentication, data classification, business-purpose validation, regulatory compliance mapping, and risk synthesis. Key innovations include early-stage policy interception and generation of machine-readable decision rationales, ensuring 100% recall for denied requests and zero false approvals of prohibited access. Evaluated across 14 representative use cases, the system achieves 92.9% decision accuracy, 100% functional appropriateness, and 100% compliance adherence; expert assessments confirm high rationale quality, with median response latency under 60 seconds.
📝 Abstract
Enterprises need access decisions that satisfy least privilege, comply with regulations, and remain auditable. We present a policy-aware controller that uses a large language model (LLM) to interpret natural-language requests against written policies and metadata, not raw data. The system, implemented with Google Gemini 2.0 Flash, executes a six-stage reasoning framework (context interpretation, user validation, data classification, business-purpose test, compliance mapping, and risk synthesis) with early hard policy gates and deny-by-default behavior. It returns APPROVE, DENY, or CONDITIONAL, together with cited controls and a machine-readable rationale. We evaluate on fourteen canonical cases across seven scenario families using a privacy-preserving benchmark. Results show Exact Decision Match improving from 10/14 to 13/14 (92.9%) after applying policy gates, DENY recall rising to 1.00, False Approval Rate on must-deny families dropping to zero, and Functional Appropriateness and Compliance Adherence at 14/14. Expert ratings of rationale quality are high, and median latency is under one minute. These findings indicate that policy-constrained LLM reasoning, combined with explicit gates and audit trails, can translate human-readable policies into safe, compliant, and traceable machine decisions.
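The control flow the abstract describes can be sketched in miniature: hard policy gates run first and short-circuit to DENY (deny by default), and only requests that clear every gate reach the later reasoning stages. This is an illustrative sketch, not the authors' implementation; the `AccessRequest` fields, the gate rules, and the stubbed stage logic are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class AccessRequest:
    user_role: str       # hypothetical: role asserted during user validation
    dataset_class: str   # hypothetical labels: "public" | "confidential" | "restricted"
    purpose: str         # stated business purpose

@dataclass
class Decision:
    outcome: str                                   # "APPROVE" | "DENY" | "CONDITIONAL"
    rationale: list = field(default_factory=list)  # machine-readable audit trail

# Hard policy gates run before any LLM reasoning; the first tripped gate
# short-circuits to DENY (early-stage policy interception).
HARD_GATES = [
    ("restricted data requires compliance role",
     lambda r: r.dataset_class == "restricted" and r.user_role != "compliance"),
    ("a business purpose must be stated",
     lambda r: not r.purpose.strip()),
]

def decide(request: AccessRequest) -> Decision:
    rationale = []
    # Stages 1-2 (context interpretation, user validation) would normally
    # parse a natural-language request; here the request arrives structured.
    for rule, tripped in HARD_GATES:
        if tripped(request):
            rationale.append(f"gate violated: {rule}")
            return Decision("DENY", rationale)     # deny by default
        rationale.append(f"gate passed: {rule}")
    # Stages 3-6 (data classification, business-purpose test, compliance
    # mapping, risk synthesis) are stubbed: confidential data is approved
    # only conditionally, standing in for an LLM-backed risk synthesis.
    if request.dataset_class == "confidential":
        rationale.append("confidential data: approve with masking condition")
        return Decision("CONDITIONAL", rationale)
    rationale.append("all stages passed")
    return Decision("APPROVE", rationale)
```

A request for restricted data from a non-compliance role never reaches the later stages, which is what guarantees zero false approvals on must-deny cases regardless of how the model reasons downstream.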