LLM Company Policies and Policy Implications in Software Organizations

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the absence of organizational policies for securely integrating large language model (LLM)-based chatbots into software development workflows. Through a qualitative multi-case study involving 11 enterprises—including in-depth interviews, internal policy document mining, and textual analysis—it provides the first systematic examination of how enterprise-level LLM policies emerge and the key drivers shaping them. The research identifies a recurring trade-off among security, regulatory compliance, and development efficiency, and distills four core policy dimensions: risk identification, access control, data governance, and auditability. Based on these findings, the study proposes a scalable, enterprise-grade LLM governance framework. Designed for practical deployment, the framework offers managers an adaptable, actionable guide for integrating AI tools into existing software engineering practices while mitigating associated risks.

📝 Abstract
The risks associated with adopting large language model (LLM) chatbots in software organizations highlight the need for clear policies. We examine how 11 companies create these policies and the factors that influence them, aiming to help managers safely integrate chatbots into development workflows.
Problem

Research questions and friction points this paper is trying to address.

What risks does adopting LLM chatbots pose to software organizations, and what policies do companies create in response?
Which factors shape how enterprise-level LLM policies emerge?
How can managers safely integrate chatbots into existing development workflows?
Innovation

Methods, ideas, or system contributions that make the work stand out.

First systematic multi-case study (11 enterprises) of how enterprise-level LLM policies emerge, combining interviews, policy document mining, and textual analysis
Identification of a recurring trade-off among security, regulatory compliance, and development efficiency
Four core policy dimensions (risk identification, access control, data governance, auditability) distilled into a scalable, enterprise-grade LLM governance framework