🤖 AI Summary
This study addresses the absence of organizational policies for securely integrating large language model (LLM) chatbots into software development workflows. Through a qualitative multi-case study of 11 enterprises, combining in-depth interviews, mining of internal policy documents, and textual analysis, it offers the first systematic examination of how enterprise-level LLM policies emerge and the key drivers that shape them. The research identifies a recurring trade-off among security, regulatory compliance, and development efficiency, and distills four core policy dimensions: risk identification, access control, data governance, and auditability. Building on these findings, the study proposes a scalable, enterprise-grade LLM governance framework designed for practical deployment, giving managers an adaptable, actionable guide for integrating AI tools into existing software engineering practices while mitigating the associated risks.
📝 Abstract
The risks associated with adopting large language model (LLM) chatbots in software organizations highlight the need for clear policies. We examine how 11 companies formulate these policies and the factors that influence them, aiming to help managers integrate chatbots into development workflows safely.