🤖 AI Summary
Background: Contemporary AI-driven medical diagnosis overemphasizes technical accuracy while neglecting critical ethical dimensions, including fairness, privacy preservation, and intersectionality (e.g., the compounded effects of race, gender, and socioeconomic status).
Method: This study proposes an interdisciplinary AI clinical decision-support framework integrating insights from the social sciences, bioethics, and public health. Centered on “ethical explainability” as a core design principle, it moves beyond purely technical evaluation by systematically embedding intersectionality theory into AI assessment. The framework unifies qualitative social analysis, ethical impact assessment, explainable AI (XAI), and privacy-enhancing computation within a human-centered development lifecycle.
Contribution/Results: The framework enables the systematic identification and mitigation of structural health inequities, thereby enhancing trustworthiness, inclusivity, and data security in real-world clinical deployment.
📝 Abstract
The increasing integration of artificial intelligence (AI) into medical diagnostics necessitates a critical examination of its ethical and practical implications. While the prioritization of diagnostic accuracy, as advocated by Sabuncu et al. (2025), is essential, that approach risks oversimplifying complex socio-ethical issues, including fairness, privacy, and intersectionality. This rebuttal highlights the dangers of reducing multifaceted health disparities to quantifiable metrics and advocates a more transdisciplinary approach. By incorporating insights from the social sciences, ethics, and public health, AI systems can address the compounded effects of intersecting identities and safeguard sensitive data. Additionally, explainability and interpretability must be central to AI design, fostering trust and accountability. This paper calls for a framework that balances accuracy with fairness, privacy, and inclusivity to ensure that AI-driven diagnostics serve diverse populations equitably and ethically.