🤖 AI Summary
Rapid industrial deployment of AI has outpaced ethical assessment, exposing systemic risks including accountability gaps, governance failures, poor data quality, weakened human oversight, insufficient technical robustness, and negative environmental and societal externalities (e.g., high energy consumption and the exacerbation of inequality); regulatory ambiguity, low transparency, and excessive technical dependence further compound compliance and safety challenges. This study employs semi-structured interviews with 15 domain experts and a systematic literature review to identify critical tensions in data ethics, regulatory implementation, and governance practice. Its primary contribution is the novel "Dual-Dimensional Coordination" framework for trustworthy AI: Dimension One embeds regulatory compliance requirements, while Dimension Two aligns with local sociocultural values. The framework institutionalizes the coupling of transparency mechanisms with responsibility allocation, offering an actionable pathway toward a human-centered, robust, and sustainable AI ecosystem.
📝 Abstract
The EU Artificial Intelligence (AI) Act directs businesses to assess their AI systems to ensure they are developed in a way that is human-centered and trustworthy. The rapid adoption of AI in industry has outpaced ethical evaluation frameworks, leading to significant challenges in accountability, governance, data quality, human oversight, technological robustness, and environmental and societal impacts. Through structured interviews with fifteen industry professionals, paired with a literature review conducted on each of the key interview findings, this paper investigates practical approaches to, and challenges in, the development and assessment of Trustworthy AI (TAI). The findings from participants in our study, together with the subsequent literature reviews, reveal complications in risk management, compliance, and accountability, which are exacerbated by a lack of transparency, unclear regulatory requirements, and rushed implementation of AI. Participants reported concerns that technological robustness and safety could be compromised by model inaccuracies, security vulnerabilities, and an overreliance on AI without proper safeguards in place. Additionally, the negative environmental and societal impacts of AI, including high energy consumption, political radicalisation, loss of culture, and reinforcement of social inequalities, are areas of concern. There is a pressing need not just for risk mitigation and TAI evaluation within AI systems but for a wider approach to developing an AI landscape that aligns with the social and cultural values of the countries adopting those technologies.