This part looks alike this: identifying important parts of explained instances and prototypes

📅 2025-05-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prototype-based explanations often suffer from poor human interpretability because they do not direct attention to the most salient features. This paper addresses this "lack of focus" problem in prototype-driven explainable AI by proposing a method to identify the semantically aligned, highly relevant overlapping regions, termed *alike parts*, between an input instance and its nearest prototype. The contributions are twofold: (1) a prototype selection objective that incorporates model-agnostic feature attribution scores (e.g., SHAP or LIME) to promote global prototype diversity; (2) a definition and extraction procedure for instance-prototype alignment regions, using importance-weighted feature overlap to localize the most relevant shared features. Experiments on six benchmark datasets show improved human comprehension, while classification accuracy remains stable or improves slightly relative to baseline methods.
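
To make this concrete, the following is a minimal sketch, assuming tabular data, of how alike parts could be extracted: features that are both highly important under a model-agnostic attribution method (e.g., SHAP) and take similar values in the instance and its nearest prototype. The function name `alike_parts` and the `top_k`/`tol` parameters are illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical sketch (not the authors' exact procedure): highlight the
# overlapping, highly important features shared by an instance and its
# nearest prototype, i.e. the "alike parts".
import numpy as np


def alike_parts(instance, prototype, importance, top_k=5, tol=0.1):
    """Return indices of features that are both highly important and take
    similar values in the instance and its nearest prototype.

    instance, prototype : 1-D arrays of feature values (same length)
    importance          : per-feature attribution scores (e.g., mean |SHAP|)
    top_k               : number of most important features to consider
    tol                 : maximum allowed relative value difference
    """
    # Rank features by attribution magnitude and keep only the top-k.
    top = np.argsort(-np.abs(importance))[:top_k]
    # Among those, keep the features whose values roughly agree.
    diff = np.abs(instance[top] - prototype[top])
    scale = np.abs(prototype[top]) + 1e-8
    return top[diff / scale <= tol]


# Toy usage: feature 2 is important and its values match, so it is an alike part.
x = np.array([1.0, 5.0, 3.0, 0.2])
p = np.array([0.9, 1.0, 3.1, 0.8])
imp = np.array([0.05, 0.40, 0.35, 0.20])
print(alike_parts(x, p, imp, top_k=3, tol=0.2))  # -> [2]
```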

📝 Abstract
Although prototype-based explanations provide a human-understandable way of representing model predictions, they often fail to direct user attention to the most relevant features. We propose a novel approach to identify the most informative features within prototypes, termed alike parts. Using feature importance scores derived from a model-agnostic explanation method, the approach emphasizes the most relevant overlapping features between an instance and its nearest prototype. Furthermore, the feature importance scores are incorporated into the objective function of the prototype selection algorithms to promote global prototype diversity. Through experiments on six benchmark datasets, we demonstrate that the proposed approach improves user comprehension while maintaining or even increasing predictive accuracy.
Problem

Research questions and friction points this paper is trying to address.

Identifying most relevant features in prototype explanations
Enhancing user comprehension without sacrificing predictive accuracy
Promoting global prototype diversity via feature importance scores
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifies informative features in prototypes
Uses feature importance scores for relevance
Enhances prototype diversity via the selection objective function (see the sketch after this list)
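
The sketch below illustrates, under the assumption of a simple farthest-first heuristic, how feature importance scores could enter a prototype selection objective so that the chosen prototypes differ mainly on the features that matter most; the paper's actual objective and optimization procedure may differ.

```python
# Illustrative sketch only: greedy prototype selection whose objective
# weights feature-wise distances by attribution scores, so the selected
# prototypes are diverse on the most relevant features.
import numpy as np


def weighted_dist(a, b, w):
    """Euclidean distance with per-feature importance weights w."""
    return np.sqrt(np.sum(w * (a - b) ** 2))


def select_prototypes(X, importance, k=3):
    """Greedily pick k rows of X that are mutually far apart under the
    importance-weighted metric (a farthest-first heuristic)."""
    chosen = [0]  # arbitrary seed instance
    while len(chosen) < k:
        # Score each candidate by its distance to the nearest already-chosen
        # prototype, then add the most "novel" candidate.
        scores = [
            min(weighted_dist(X[i], X[j], importance) for j in chosen)
            for i in range(len(X))
        ]
        chosen.append(int(np.argmax(scores)))
    return chosen


# Toy usage: importance is concentrated on the first feature, so the two
# selected prototypes end up far apart along that feature.
X = np.array([[0.0, 1.0], [0.1, 9.0], [5.0, 1.1], [5.1, 8.9]])
imp = np.array([0.9, 0.1])
print(select_prototypes(X, imp, k=2))  # -> [0, 3]
```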
Jacek Karolczak
Poznan University of Technology, Institute of Computing Science, ul. Piotrowo 2, 60-695 Poznań, Poland
Jerzy Stefanowski
Poznan University of Technology
machine learning, data streams, Explainable AI, rule learning, imbalanced classification