Personalized Interpretability -- Interactive Alignment of Prototypical Parts Networks

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Concept-based explainable neural networks suffer from concept inconsistency (e.g., conflating a bird's head and wings into a single concept), which misaligns explanations with human understanding; moreover, existing methods provide no mechanism for incorporating users' personalized preferences about how concepts should look. This paper introduces YoursProtoP, a user-driven framework that embeds interactive feedback directly into the prototype learning process of ProtoPNet. Guided by user annotations, the method supports concept splitting, re-clustering, and consistency regularization, jointly optimizing interpretability and alignment with individual users' understanding. Evaluated on FunnyBirds, CUB, CARS, and PETS, the approach achieves significant improvements in concept consistency, boosts user satisfaction by 42%, and maintains classification accuracy without degradation.

📝 Abstract
Concept-based interpretable neural networks have gained significant attention due to their intuitive and easy-to-understand explanations based on case-based reasoning, such as "this bird looks like those sparrows". However, a major limitation is that these explanations may not always be comprehensible to users due to concept inconsistency, where multiple visual features are inappropriately mixed (e.g., a bird's head and wings treated as a single concept). This inconsistency breaks the alignment between model reasoning and human understanding. Furthermore, users have specific preferences for how concepts should look, yet current approaches provide no mechanism for incorporating their feedback. To address these issues, we introduce YoursProtoP, a novel interactive strategy that enables the personalization of prototypical parts - the visual concepts used by the model - according to user needs. By incorporating user supervision, YoursProtoP adapts and splits concepts used for both prediction and explanation to better match the user's preferences and understanding. Through experiments on both the synthetic FunnyBirds dataset and a real-world scenario using the CUB, CARS, and PETS datasets in a comprehensive user study, we demonstrate the effectiveness of YoursProtoP in achieving concept consistency without compromising the accuracy of the model.
Problem

Research questions and friction points this paper is trying to address.

Addressing concept inconsistency in interpretable neural networks
Enabling user feedback for personalized visual concepts
Maintaining model accuracy while improving interpretability alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interactive personalization of prototypical parts
User feedback integration for concept adaptation
Concept consistency without loss of classification accuracy
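The summary's core mechanism, splitting a "mixed" prototype into cleaner parts based on user feedback, can be sketched in miniature. The following is an illustrative sketch, not the authors' implementation: `split_prototype` is a hypothetical helper that takes the patch embeddings most strongly activating a flagged prototype and re-clusters them with plain k-means, yielding one new prototype vector per recovered part (e.g., separating "head" patches from "wing" patches).

```python
import numpy as np

def split_prototype(patch_embeddings, n_parts=2, n_iters=50, seed=0):
    """Hypothetical sketch: split one user-flagged 'mixed' prototype into
    n_parts new prototype vectors by k-means clustering the patch
    embeddings that most activate it."""
    rng = np.random.default_rng(seed)
    X = np.asarray(patch_embeddings, dtype=float)
    # Initialize centroids from randomly chosen distinct patches.
    centroids = X[rng.choice(len(X), size=n_parts, replace=False)]
    for _ in range(n_iters):
        # Assign each patch to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep an empty cluster's centroid unchanged.
        new_centroids = np.array([
            X[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
            for k in range(n_parts)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    # Each centroid becomes a candidate replacement prototype.
    return centroids, labels
```

In the actual method the new prototypes would be pushed back into the network and fine-tuned under a consistency regularizer; this sketch only shows the re-clustering step under the assumption that prototype activations are available as patch embeddings.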
Tomasz Michalski
Doctoral School of Exact and Natural Sciences, Jagiellonian University, Kraków, Poland
Adam Wróbel
Faculty of Mathematics and Computer Science, Jagiellonian University, Kraków, Poland
Andrea Bontempelli
University of Trento
Machine Learning · Concept Drift · Interactive Machine Learning
Jakub Luśtyk
Transmission Dynamics Poland sp. z o. o., Henryka Pachońskiego 9/K-22, 31-223 Kraków, Poland
Mikołaj Kniejski
Faculty of Psychology, University of Warsaw, Poland
Stefano Teso
Senior Assistant Professor, University of Trento
Machine Learning · Explainable AI · Interactive Machine Learning · Neuro-Symbolic AI · Constraints
Andrea Passerini
Professor, University of Trento
Interactive Machine Learning · Learning with Structured Data · Learning and Reasoning
Bartosz Zieliński
Faculty of Mathematics and Computer Science, Jagiellonian University, Kraków, Poland
Dawid Rymarczyk
Ardigen SA, Leona Henryka Sternbacha 1, 30-394 Kraków, Poland