📝 Abstract
Recent advances in AI applications have raised growing concerns about the need for ethical guidelines and regulations to mitigate the risks these technologies pose. In this paper, we present a mixed-methods survey study, combining statistical and qualitative analyses, to examine the ethical perceptions, practices, and knowledge of individuals involved in various AI development roles. Our survey includes 414 participants from 43 countries, representing roles such as AI managers, analysts, developers, quality assurance professionals, and information security and privacy experts. The results reveal varying degrees of familiarity and experience with AI ethics principles, government initiatives, and risk mitigation strategies across roles, regions, and other demographic factors. Our findings highlight the importance of a collaborative, role-sensitive approach that involves diverse stakeholders in ethical decision-making throughout the AI development lifecycle. We advocate for tailored, inclusive solutions to the ethical challenges of AI development, and we propose future research directions and educational strategies to promote ethics-aware AI practices.