Healthy Distrust in AI systems

📅 2025-05-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Contemporary AI research overemphasizes “trust-building” while neglecting legitimate societal skepticism toward AI systems. This paper introduces the concept of “healthy distrust”—a normative stance of principled, context-sensitive skepticism toward AI, grounded in respect for individual autonomy. Through interdisciplinary conceptual analysis spanning computer science, sociology, and philosophy, we develop the first theoretical framework for healthy distrust, rigorously defining its scope, normative boundaries, and epistemic justification. We demonstrate that well-grounded distrust is not antithetical to trust but may constitute a necessary precondition for it. Our work addresses a critical gap in AI trustworthiness research by establishing the legitimacy of structural distrust, thereby offering a novel paradigm for AI ethics design, regulatory policy, and human-AI interaction practice.

📝 Abstract
Under the slogan of trustworthy AI, much of contemporary AI research is focused on designing AI systems and usage practices that inspire human trust and, thus, enhance adoption of AI systems. However, a person affected by an AI system may not be convinced by AI system design alone; nor should they be, if the AI system is embedded in a social context that gives good reason to believe that it is used in tension with the person's interests. In such cases, distrust in the system may be justified and necessary to build meaningful trust in the first place. We propose the term "healthy distrust" to describe such a justified, careful stance towards certain AI usage practices. We investigate prior notions of trust and distrust in computer science, sociology, history, psychology, and philosophy, outline a remaining gap that healthy distrust might fill, and conceptualize healthy distrust as a crucial part of AI usage that respects human autonomy.
Problem

Research questions and friction points this paper is trying to address.

Exploring justified distrust in AI systems' social contexts
Defining 'healthy distrust' as necessary for meaningful trust
Conceptualizing distrust to protect human autonomy in AI usage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces 'healthy distrust' in AI systems
Analyzes trust across multiple disciplines
Emphasizes human autonomy in AI usage