Co-design for Trustworthy AI: An Interpretable and Explainable Tool for Type 2 Diabetes Prediction Using Genomic Polygenic Risk Scores

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited interpretability of polygenic risk scores (PRS) in predicting complex diseases such as type 2 diabetes, which hinders their clinical adoption. To overcome this challenge, the authors propose XPRS, an interpretable PRS framework that, for the first time, applies SHAP (SHapley Additive exPlanations) to decompose PRS contributions down to individual genes and SNPs. The work further integrates the Z-inspection and HUDERIA frameworks to conduct a multidimensional trustworthiness assessment. Through a co-design process incorporating legal, ethical, medical, and technical perspectives, the project establishes a multidisciplinary paradigm for trustworthy AI development and delivers a practical guideline spanning ethical, legal, and technical dimensions. This framework offers an innovative reference for enhancing the interpretability and trustworthy deployment of PRS and other clinical AI systems.
📝 Abstract
Polygenic risk scores (PRS) have emerged as an important methodology for quantifying genetic predisposition to complex traits and clinical disease. Significant progress has been made in applying PRS to conditions such as obesity, cancer, and type 2 diabetes (T2DM). Studies have demonstrated that PRS can effectively identify individuals at high risk, thereby enabling early screening, personalized treatment, and targeted interventions for diseases with a genetic predisposition. One current limitation of PRS, however, is the lack of interpretability tools. To address this problem for T2DM, researchers at the Graduate School of Data Science at Seoul National University introduced eXplainable PRS (XPRS). This visualization tool decomposes PRSs into gene-level and single-nucleotide polymorphism (SNP) contribution scores via SHapley Additive exPlanations (SHAP), providing granular insights into the specific genetic factors driving an individual's risk profile. We used a co-design approach to assess the trustworthiness of XPRS, considering legal, medical, ethical, and technical robustness during early design and potential clinical use. To do so, we applied Z-inspection, an ethically aligned Trustworthy AI co-design methodology, and piloted the Council of Europe's Human Rights, Democracy, and the Rule of Law Impact Assessment for AI Systems (HUDERIA) (Council of Europe (CAI) 2025). The findings of this use case comprise a comprehensive set of ethical, legal, and technical lessons learned. These insights, identified by a multidisciplinary team of experts (ethics, legal, human rights, computer science, and medical), serve as a framework for designers navigating future challenges with this and other AI systems. The findings also provide a useful reference for researchers developing explainability frameworks for PRS in diverse clinical contexts.
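The decomposition described above can be illustrated with a minimal sketch. A PRS is an additive model (a weighted sum of allele counts), and for additive models the exact Shapley value of each SNP has a closed form: its weighted deviation from the cohort baseline. The SNP count, effect sizes, and gene-to-SNP mapping below are illustrative assumptions, not data from the paper or the actual XPRS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cohort: genotypes coded 0/1/2 (allele counts) for 5 SNPs, 100 people.
# Effect sizes (betas) are hypothetical, not real T2DM GWAS weights.
n_people, n_snps = 100, 5
genotypes = rng.integers(0, 3, size=(n_people, n_snps)).astype(float)
betas = np.array([0.8, -0.3, 0.5, 0.1, -0.6])

# PRS is a weighted sum of allele counts: score_j = sum_i beta_i * x_ji
prs = genotypes @ betas

# For a linear additive model, the exact Shapley value of SNP i for
# individual j is beta_i * (x_ji - mean_i): the SNP's contribution
# relative to the cohort-average genotype.
baseline = genotypes.mean(axis=0)
shap_values = betas * (genotypes - baseline)

# Sanity check: contributions plus the baseline score reconstruct the PRS.
assert np.allclose(shap_values.sum(axis=1) + betas @ baseline, prs)

# Gene-level scores aggregate SNP contributions via a SNP-to-gene map
# (hypothetical mapping: SNPs 0-1 -> geneA, SNPs 2-4 -> geneB).
gene_map = {"geneA": [0, 1], "geneB": [2, 3, 4]}
gene_scores = {g: shap_values[:, idx].sum(axis=1) for g, idx in gene_map.items()}
```

In practice, tools like XPRS compute such attributions with the SHAP library and visualize them per individual; the closed-form linear case above shows why the per-SNP scores sum exactly back to the risk score.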
Problem

Research questions and friction points this paper is trying to address.

polygenic risk scores
interpretability
type 2 diabetes
trustworthy AI
explainable AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable AI
Polygenic Risk Score
SHAP
Co-design
Trustworthy AI
Ralf Beuthan
Department of Philosophy at Myongji University, Seoul, S. Korea
Megan Coffee
Department of Medicine and Division of Infectious Diseases and Immunology, NYU Grossman School of Medicine, New York, USA
Heejin Kim
Korea University
Organic Chemistry, Flow Chemistry
Na Yeon Kim
Graduate School of Data Science, Seoul National University (SNU), Seoul, S. Korea
Pedro Kringen
Trustworthy AI Lab, Østfold University College, Fredrikstad, Norway
Elisabeth Hildt
Illinois Institute of Technology & L3S Research Center
Haekyung Lee
Division of Nephrology, Department of Internal Medicine, Soonchunhyang University Seoul Hospital, Seoul, S. Korea
Seunggeun Lee
Graduate School of Data Science, Seoul National University (SNU), Seoul, S. Korea
Emilie Wiinblad Mathez
Z-inspection® Initiative, Geneva, Switzerland
Sira Maliphol
Graduate School of Engineering Practice, Seoul National University (SNU), Seoul, S. Korea
Vadim Pak
Council of Europe, Administrator in the Committee on Artificial Intelligence/Steering Committee on New and Emerging Digital Technologies, France
Yuna Park
Yonsei University
Large Language Models, Facial Expression Recognition
Stephan Sonnenberg
Seoul National University School of Law, Seoul National University (SNU), Seoul, S. Korea
Jesmin Jahan Tithi
Intel Corporation, Stony Brook University
High Performance Computing, Software-Hardware Co-design, Ethics in AI, Machine Learning, Machine Programming
Magnus Westerlund
Arcada University of Applied Sciences
Trustworthy AI, Distributed Ledger Technology, Security, Blockchain, Autonomous Agents
Roberto V. Zicari
Graduate School of Data Science, Seoul National University (SNU), Seoul, S. Korea