🤖 AI Summary
Empirical evidence on the practical adoption of generative AI (GenAI) in software engineering (SE) remains scarce. Method: We conduct the first large-scale mixed-methods study (N=1,278 surveys, 32 in-depth interviews, and tool usage log analysis) to systematically examine GenAI integration across core SE activities: implementation, verification, and maintenance. Contribution/Results: Our findings reveal deep yet uneven integration, exposing a critical governance gap between tool institutionalization and engineer capability development. GenAI demonstrably accelerates development cycles and augments knowledge work; however, unreliable outputs, the expertise demanded by prompt engineering, and substantial verification overhead remain key bottlenecks. Contrary to automation fears, practitioners anticipate role evolution rather than job displacement. This study establishes the first empirically grounded, large-scale foundation for responsible GenAI adoption in SE, offering actionable governance insights for practitioners, tool developers, and policymakers.
📝 Abstract
Context. GenAI tools are increasingly being adopted by SE practitioners, promising support for a range of SE activities. Despite this growing adoption, we still lack empirical evidence on how GenAI is used in practice, the benefits it provides, the challenges it introduces, and its broader organizational and societal implications. Objective. This study aims to provide an overview of the status of GenAI adoption in SE. It investigates current adoption, the associated benefits and challenges, the institutionalization of tools and techniques, and the anticipated long-term impacts on SE professionals and the wider community. Results. The results indicate wide adoption of GenAI tools, which are deeply integrated into daily SE work, particularly for implementation, verification and validation, personal assistance, and maintenance-related tasks. Practitioners report substantial benefits, most notably reductions in cycle time, quality improvements, enhanced support for knowledge work, and productivity gains. However, objective measurement of productivity and quality remains limited in practice. Significant challenges persist, including incorrect or unreliable outputs, prompt engineering difficulties, validation overhead, security and privacy concerns, and risks of overreliance. Institutionalization of tools and techniques appears common but varies considerably, with a strong focus on tool access and less emphasis on training and governance. Practitioners expect GenAI to redefine rather than replace their roles, while expressing moderate concern about job market contraction and skill shifts.