Trust and Transparency in AI: Industry Voices on Data, Ethics, and Compliance

📅 2025-09-23
📈 Citations: 0
✨ Influential: 0
๐Ÿ“„ PDF
🤖 AI Summary
The rapid industrial deployment of AI has outpaced ethical assessment, exposing systemic risks including accountability gaps, governance failures, poor data quality, weakened human oversight, insufficient technical robustness, and negative environmental and societal externalities (e.g., high energy consumption, exacerbation of inequality); regulatory ambiguity, low transparency, and excessive technical dependence further compound compliance and safety challenges. This study employs semi-structured interviews with 15 domain experts and a systematic literature review to identify critical tensions in data ethics, regulatory implementation, and governance practice. Its primary contribution is the novel "Dual-Dimensional Coordination" framework for trustworthy AI: Dimension One embeds regulatory compliance requirements, while Dimension Two aligns with local sociocultural values. The framework institutionalizes the coupling of transparency mechanisms and responsibility allocation, offering an actionable pathway toward a human-centered, robust, and sustainable AI ecosystem.

๐Ÿ“ Abstract
The EU Artificial Intelligence (AI) Act directs businesses to assess their AI systems to ensure they are developed in a way that is human-centered and trustworthy. The rapid adoption of AI in the industry has outpaced ethical evaluation frameworks, leading to significant challenges in accountability, governance, data quality, human oversight, technological robustness, and environmental and societal impacts. Through structured interviews with fifteen industry professionals, paired with a literature review conducted on each of the key interview findings, this paper investigates practical approaches and challenges in the development and assessment of Trustworthy AI (TAI). The findings from participants in our study, and the subsequent literature reviews, reveal complications in risk management, compliance and accountability, which are exacerbated by a lack of transparency, unclear regulatory requirements and a rushed implementation of AI. Participants reported concerns that technological robustness and safety could be compromised by model inaccuracies, security vulnerabilities, and an overreliance on AI without proper safeguards in place. Additionally, the negative environmental and societal impacts of AI, including high energy consumption, political radicalisation, loss of culture and reinforcement of social inequalities, are areas of concern. There is a pressing need not just for risk mitigation and TAI evaluation within AI systems but for a wider approach to developing an AI landscape that aligns with the social and cultural values of the countries adopting those technologies.
Problem

Research questions and friction points this paper is trying to address.

Investigating practical challenges in developing trustworthy AI systems
Addressing AI accountability gaps caused by lack of transparency
Mitigating negative environmental and societal impacts of AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conducted structured interviews with industry professionals
Paired interviews with literature review findings
Investigated practical trustworthy AI development approaches
Louise McCormack
ADAPT Research Centre, School of Computer Science, University of Galway, Galway, Ireland.
Diletta Huyskes
University of Milan, Milan, Italy.
Dave Lewis
ADAPT Research Centre, Trinity College Dublin, Dublin, Ireland.
Malika Bendechache
Assistant Professor, University of Galway
Big Data Analytics, Machine Learning in Healthcare, Data Governance, AI Ethics & Trustworthiness.