Addressing Intersectionality, Explainability, and Ethics in AI-Driven Diagnostics: A Rebuttal and Call for Transdisciplinary Action

📅 2025-01-15
🤖 AI Summary
Contemporary AI-driven medical diagnosis overemphasizes technical accuracy while neglecting critical ethical dimensions, including fairness, privacy preservation, and intersectionality (e.g., the compounded effects of race, gender, and socioeconomic status). Method: This study proposes an interdisciplinary AI clinical decision-support framework that integrates insights from the social sciences, bioethics, and public health. Centered on “ethical explainability” as a core design principle, it moves beyond purely technical evaluation by systematically embedding intersectionality theory into AI assessment. The framework unifies qualitative social analysis, ethical impact assessment, explainable AI (XAI), and privacy-enhancing computation within a human-centered development lifecycle. Contribution/Results: The framework enables the systematic identification and mitigation of structural health inequities, thereby enhancing trustworthiness, inclusivity, and data security in real-world clinical deployment.

📝 Abstract
The increasing integration of artificial intelligence (AI) into medical diagnostics necessitates a critical examination of its ethical and practical implications. While the prioritization of diagnostic accuracy, as advocated by Sabuncu et al. (2025), is essential, this approach risks oversimplifying complex socio-ethical issues, including fairness, privacy, and intersectionality. This rebuttal emphasizes the dangers of reducing multifaceted health disparities to quantifiable metrics and advocates for a more transdisciplinary approach. By incorporating insights from social sciences, ethics, and public health, AI systems can address the compounded effects of intersecting identities and safeguard sensitive data. Additionally, explainability and interpretability must be central to AI design, fostering trust and accountability. This paper calls for a framework that balances accuracy with fairness, privacy, and inclusivity to ensure AI-driven diagnostics serve diverse populations equitably and ethically.
Problem

Research questions and friction points this paper is trying to address.

Interdisciplinary Integration
Explainability
Ethical Considerations

Innovation

Methods, ideas, or system contributions that make the work stand out.

Interdisciplinary Approach
Trustworthy AI
Interpretability
Myles Joshua Toledo Tan
Assistant Professor of Engineering, University of St. La Salle
artificial intelligence, medical computer vision, computational pathology, engineering education
P. Benos
Department of Epidemiology, College of Public Health & Health Professions and College of Medicine, University of Florida, Gainesville, Florida, 32610, United States of America