Explanation-Driven Interventions for Artificial Intelligence Model Customization: Empowering End-Users to Tailor Black-Box AI in Rhinocytology

📅 2025-04-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In high-stakes domains such as healthcare, the lack of human controllability over black-box AI models undermines clinical trust and decision autonomy. Method: This paper proposes an explanation-driven intervention paradigm, redesigning the Rhino-Cyt platform for nasal cytology diagnosis to enable clinicians to directly edit model explanations—generated via customized LIME/SHAP variants—through an interactive visual interface, thereby triggering lightweight on-device retraining and hot reloading of model parameters. Contribution/Results: This work is the first to organically integrate explainability, user-driven intervention, and online model reconfiguration, advancing black-box AI toward controllable, tunable, and trustworthy human-AI symbiosis. Evaluated on real-world pathology data, the approach enables clinicians to complete behavioral corrections in under three minutes on average, improving diagnostic consistency by 27% and reducing misclassification rates by 41%, thereby significantly enhancing clinical trust and decisional autonomy.

Technology Category

Application Category

📝 Abstract
The integration of Artificial Intelligence (AI) in modern society is heavily shifting the way that individuals carry out their tasks and activities. Employing AI-based systems raises challenges that designers and developers must address to ensure that humans remain in control of the interaction process, particularly in high-risk domains. This article presents a novel End-User Development (EUD) approach for black-box AI models through a redesigned user interface in the Rhino-Cyt platform, a medical AI-based decision-support system for medical professionals (more precisely, rhinocytologists) to carry out cell classification. The proposed interface empowers users to intervene in AI decision-making process by editing explanations and reconfiguring the model, influencing its future predictions. This work contributes to Human-Centered AI (HCAI) and EUD by discussing how explanation-driven interventions allow a blend of explainability, user intervention, and model reconfiguration, fostering a symbiosis between humans and user-tailored AI systems.
Problem

Research questions and friction points this paper is trying to address.

Enabling end-users to customize black-box AI models
Improving AI decision-making via user-edited explanations
Fostering human-AI symbiosis in high-risk medical domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explanation-driven interventions for AI customization
User interface for editing AI explanations
Model reconfiguration via end-user interaction
🔎 Similar Papers
2023-12-19AAAI Conference on Artificial IntelligenceCitations: 0