🤖 AI Summary
This study examines the dynamic interplay of mutual trust and distrust between humans and artificial intelligence in AI governance, challenging conventional unidirectional models that treat AI solely as an object of trust. It proposes that AI can also function as an agentic subject capable of exercising trust or distrust. By integrating perspectives from philosophy, governance theory, human-computer interaction, and sociotechnical systems, the work develops an analytical framework that uncovers the core tensions and structural dilemmas this bidirectional dynamic generates for AI regulation. The research provides a theoretical foundation for understanding the emerging politics of trust in AI governance and identifies critical challenges that future institutional designs must confront.
📝 Abstract
Policy makers, scientists, and the public are increasingly confronted with thorny questions about the regulation of artificial intelligence (AI) systems. A key common thread concerns whether AI can be trusted and which factors can make it more trustworthy in the eyes of stakeholders and users. This question is crucial, as the trustworthiness of AI systems is fundamental both to democratic governance and to the development and deployment of AI. This article advances the discussion by arguing that AI systems should also be recognized, at least to some extent, as artifacts capable of exercising a form of agency, thereby enabling them to engage in relationships of trust or distrust with humans. It further examines the implications of these reciprocal trust dynamics for regulators tasked with overseeing AI systems. The article concludes by identifying key tensions and unresolved dilemmas that these dynamics pose for the future of AI regulation and governance.