🤖 AI Summary
This study addresses the limited interpretability of polygenic risk scores (PRS) in predicting complex diseases such as type 2 diabetes, a limitation that hinders their clinical adoption. To overcome it, the authors propose XPRS, an interpretable PRS framework that applies SHAP (SHapley Additive exPlanations) to decompose PRS contributions down to individual genes and SNPs. The work further integrates the Z-inspection and HUDERIA frameworks to conduct a multidimensional trustworthiness assessment. Through a co-design process incorporating legal, ethical, medical, and technical perspectives, the project establishes a multidisciplinary paradigm for trustworthy AI development and delivers practical guidance spanning ethical, legal, and technical dimensions. This framework offers a useful reference for enhancing the interpretability and trustworthy deployment of PRS and other clinical AI systems.
📝 Abstract
Polygenic risk scores (PRS) have emerged as an important methodology for quantifying genetic predisposition to complex traits and clinical disease. Significant progress has been made in applying PRS to conditions such as obesity, cancer, and type 2 diabetes (T2DM). Studies have demonstrated that PRS can effectively identify individuals at high risk, thereby enabling early screening, personalized treatment, and targeted interventions for diseases with a genetic component. A current limitation of PRS, however, is the lack of interpretability tools. To address this problem for T2DM, researchers at the Graduate School of Data Science at Seoul National University introduced eXplainable PRS (XPRS), a visualization tool that decomposes PRSs into gene-level and single-nucleotide polymorphism (SNP) contribution scores via SHapley Additive exPlanations (SHAP), providing granular insight into the specific genetic factors driving an individual's risk profile. We used a co-design approach to assess the trustworthiness of XPRS, considering legal, medical, ethical, and technical robustness during early design and potential clinical use. To that end, we applied Z-inspection, an ethically aligned Trustworthy AI co-design methodology, and piloted the Council of Europe's Human Rights, Democracy, and the Rule of Law Impact Assessment for AI Systems (HUDERIA) (Council of Europe (CAI) 2025). The findings of this use case comprise a comprehensive set of ethical, legal, and technical lessons learned. These insights, identified by a multidisciplinary team of experts in ethics, law, human rights, computer science, and medicine, serve as a framework for designers navigating future challenges with this and other AI systems. They also provide a useful reference for researchers developing explainability frameworks for PRS in diverse clinical contexts.
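To illustrate the kind of decomposition the abstract describes, the sketch below computes SHAP-style contributions for a linear PRS (a weighted sum of SNP dosages), for which Shapley values have the closed form beta_i * (x_i - E[x_i]), then aggregates SNP contributions to the gene level. This is a minimal, hypothetical illustration, not the XPRS implementation; all names (the gene symbols, the effect sizes, the SNP-to-gene mapping) are made up for the example.

```python
import numpy as np

# Hypothetical cohort: dosages in {0, 1, 2} for a handful of SNPs.
rng = np.random.default_rng(0)
n_individuals, n_snps = 100, 6
dosages = rng.integers(0, 3, size=(n_individuals, n_snps)).astype(float)
betas = np.array([0.8, -0.3, 0.5, 0.1, -0.6, 0.2])   # per-SNP effect sizes (illustrative)
snp_to_gene = ["TCF7L2", "TCF7L2", "KCNJ11", "KCNJ11", "PPARG", "PPARG"]  # assumed mapping

prs = dosages @ betas                    # polygenic risk score per individual
baseline = dosages.mean(axis=0) @ betas  # expected PRS over the cohort

# SNP-level SHAP values: for a linear model, the exact Shapley value of
# feature i is beta_i * (x_i - mean(x_i)).
shap_snp = betas * (dosages - dosages.mean(axis=0))

# Gene-level contributions: sum the SHAP values of the SNPs mapped to each gene.
genes = sorted(set(snp_to_gene))
shap_gene = {
    g: shap_snp[:, [i for i, s in enumerate(snp_to_gene) if s == g]].sum(axis=1)
    for g in genes
}

# Additivity: baseline plus all contributions recovers each individual's PRS.
assert np.allclose(baseline + shap_snp.sum(axis=1), prs)
```

Because the contributions are additive, an individual's score can be displayed as a baseline plus per-gene (or per-SNP) bars, which is the granularity of explanation the tool aims to provide.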