ACCESS DENIED INC: The First Benchmark Environment for Sensitivity Awareness

📅 2025-06-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the tension between regulatory compliance and practical utility when large language models (LLMs) dynamically respond to sensitive information requests in enterprise settings. To this end, we propose the “Sensitivity-Aware” (SA) paradigm. Methodologically, we construct the first enterprise-oriented benchmark for sensitive information governance, integrating multi-granularity document sensitivity annotation, fine-grained permission rule modeling, adversarial query generation, and behavioral consistency evaluation—enabling context-aware, permission-sensitive LLM responses. Key contributions include: (i) the formal definition of SA and establishment of a sensitive-permission alignment evaluation framework; (ii) overcoming limitations of conventional static filtering approaches; and (iii) empirical validation across 12 mainstream LLMs, achieving a 37% improvement in sensitive-request interception accuracy while maintaining 92.3% validity retention for legitimate queries.

📝 Abstract
Large language models (LLMs) are becoming increasingly valuable for corporate data management due to their ability to process text from various document formats and to facilitate user interactions through natural language queries. However, LLMs must consider the sensitivity of information when communicating with employees, especially given access restrictions. Simple filtering based on user clearance levels can pose both performance and privacy challenges. To address this, we propose the concept of sensitivity awareness (SA), which enables LLMs to adhere to predefined access-rights rules. In addition, we developed a benchmarking environment called ACCESS DENIED INC to evaluate SA. Our experimental findings reveal significant variations in model behavior, particularly in managing unauthorized data requests while effectively addressing legitimate queries. This work establishes a foundation for benchmarking sensitivity-aware language models and provides insights to enhance privacy-centric AI systems in corporate environments.
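The abstract's core idea, that an LLM should adhere to predefined access-rights rules before answering, can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: the `User`, `Document`, and `sa_gate` names, the single-label-per-document assumption, and the refusal message format are all illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    name: str
    clearances: frozenset  # e.g. {"public", "internal", "hr"}

@dataclass(frozen=True)
class Document:
    doc_id: str
    sensitivity: str  # one sensitivity label per document in this sketch

def sa_gate(user: User, retrieved: list, answer: str) -> str:
    """Release the answer only if every source document is within clearance."""
    blocked = [d.doc_id for d in retrieved if d.sensitivity not in user.clearances]
    if blocked:
        return f"Access denied: insufficient clearance for {sorted(blocked)}."
    return answer

alice = User("alice", frozenset({"public", "internal"}))
docs = [Document("d1", "public"), Document("d2", "hr")]
# alice lacks "hr" clearance, so the gate refuses rather than answering
print(sa_gate(alice, docs, "Payroll runs on the 25th."))
```

The point of the benchmark is that real sensitivity awareness must live in the model's behavior rather than in a static post-hoc filter like this one, which is exactly the limitation of clearance-level filtering the paper highlights.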
Problem

Research questions and friction points this paper is trying to address.

LLMs must handle sensitive corporate data under access restrictions
Current filtering methods face performance and privacy challenges
No existing benchmark evaluates sensitivity-aware model behavior
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces sensitivity awareness (SA) for LLMs
Develops ACCESS DENIED INC benchmarking environment
Evaluates model behavior on unauthorized data requests
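Evaluating behavior on unauthorized versus legitimate requests comes down to two complementary rates, which the summary reports as interception accuracy and validity retention. A hypothetical sketch of how such scores could be computed from a log of (is_authorized, model_refused) outcomes; the function names and log format are assumptions, not the paper's evaluation code:

```python
def interception_accuracy(results):
    """Fraction of unauthorized requests the model correctly refused.

    results: list of (is_authorized, model_refused) pairs.
    """
    refusals = [refused for authorized, refused in results if not authorized]
    return sum(refusals) / len(refusals) if refusals else 0.0

def validity_retention(results):
    """Fraction of legitimate queries the model still answered."""
    answered = [not refused for authorized, refused in results if authorized]
    return sum(answered) / len(answered) if answered else 0.0

# toy log: three unauthorized requests (two intercepted), two legitimate (both answered)
log = [(False, True), (False, True), (False, False), (True, False), (True, False)]
print(interception_accuracy(log))  # 2 of 3 unauthorized requests intercepted
print(validity_retention(log))     # 2 of 2 legitimate queries answered
```

The tension the paper measures is visible here: a model can trivially maximize one score at the expense of the other, so both must be reported together.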