Towards the Use of Saliency Maps for Explaining Low-Quality Electrocardiograms to End Users

📅 2022-07-06
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
In remote telemedicine, low-quality electrocardiograms (ECGs) are often identified only after the patient has left the clinic, necessitating costly and burdensome return visits, especially for patients in remote or underserved areas. To address this, the authors develop a real-time AI-based ECG quality assessment system aimed at frontline healthcare technicians. The system pairs a deep learning classifier with Grad-CAM-based saliency visualization to flag substandard ECGs and localize artifact-prone regions. The work combines an interview study, conducted to understand the explanation needs of stakeholders, with the design of what the authors describe as the first longitudinal field study of explainable AI (XAI) targeting end users without AI expertise. Deployed at Portal Telemedicina, a digital healthcare organization in Brazil, the planned longitudinal study will examine whether saliency-map explanations improve technicians' efficiency, trust, and accuracy when handling low-quality exams, working toward a practical, deployable XAI paradigm for AI-assisted diagnosis in resource-constrained settings.
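The summary above describes a saliency pipeline in which a classifier's feature maps are turned into a per-sample "where did the quality problem come from" overlay. As a rough illustration of the idea only (not the paper's implementation), here is a minimal NumPy sketch of class activation mapping (CAM), the technique Grad-CAM generalizes, applied to a synthetic 1-D signal; the convolution filters and head weights are random placeholders, not a trained ECG model.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid-mode 1-D convolution: x is (length,), kernels is (n_filters, k)."""
    n_filters, k = kernels.shape
    windows = np.lib.stride_tricks.sliding_window_view(x, k)  # (L, k)
    return windows @ kernels.T                                # (L, n_filters)

def quality_saliency(signal, kernels, head_w):
    """CAM-style saliency: which samples drove the 'low quality' score.

    With a global-average-pool + linear head, the class evidence at each
    position is the head-weighted sum of feature-map activations there.
    """
    feats = np.maximum(conv1d(signal, kernels), 0.0)  # ReLU feature maps (L, n)
    score = feats.mean(axis=0) @ head_w               # GAP + linear class score
    cam = np.maximum(feats @ head_w, 0.0)             # per-position evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1]
    return score, cam

# Toy "ECG": a clean sinusoid with a burst of simulated motion artifact.
t = np.linspace(0, 1, 200)
ecg = np.sin(2 * np.pi * 5 * t)
ecg[90:110] += rng.normal(0.0, 2.0, 20)

kernels = rng.normal(0.0, 1.0, (8, 5))  # placeholder, untrained filters
head_w = rng.normal(0.0, 1.0, 8)        # placeholder, untrained head weights
score, cam = quality_saliency(ecg, kernels, head_w)
```

In a deployed system the `cam` vector would be rendered as a heatmap over the ECG trace so a technician can see which segment to re-record; here the weights are random, so the map is structurally correct but not meaningful.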
📝 Abstract
When using medical images for diagnosis, either by clinicians or artificial intelligence (AI) systems, it is important that the images are of high quality. When an image is of low quality, the medical exam that produced the image often needs to be redone. In telemedicine, a common problem is that the quality issue is only flagged once the patient has left the clinic, meaning they must return in order to have the exam redone. This can be especially difficult for people living in remote regions, who make up a substantial portion of the patients at Portal Telemedicina, a digital healthcare organization based in Brazil. In this paper, we report on ongoing work regarding (i) the development of an AI system for flagging and explaining low-quality medical images in real-time, (ii) an interview study to understand the explanation needs of stakeholders using the AI system at Portal Telemedicina, and (iii) a longitudinal user study design to examine the effect of including explanations on the workflow of the technicians in our clinics in the context of understanding low-quality medical exams. To the best of our knowledge, this would be the first longitudinal study on evaluating the effects of XAI methods on end-users, i.e., stakeholders that use AI systems but do not have AI-specific expertise. We welcome feedback and suggestions on our experimental setup.
Problem

Research questions and friction points this paper is trying to address.

Developing AI to flag low-quality ECG images in real-time
Understanding explanation needs of non-expert stakeholders using AI
Evaluating how XAI explanations affect clinical technician workflows
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI system flags low-quality medical images
Saliency maps explain ECG quality issues
Longitudinal study evaluates XAI effects