
Personalized Interpretability -- Interactive Alignment of Prototypical Parts Networks / Michalski, Tomasz; Wróbel, Adam; Bontempelli, Andrea; Luśtyk, Jakub; Kniejski, Mikolaj; Teso, Stefano; Passerini, Andrea; Zieliński, Bartosz; Rymarczyk, Dawid. - (2025).

Personalized Interpretability -- Interactive Alignment of Prototypical Parts Networks

Andrea Bontempelli; Stefano Teso; Andrea Passerini
2025-01-01

Abstract

Concept-based interpretable neural networks have gained significant attention due to their intuitive and easy-to-understand explanations based on case-based reasoning, such as "this bird looks like those sparrows". However, a major limitation is that these explanations may not always be comprehensible to users due to concept inconsistency, where multiple visual features are inappropriately mixed (e.g., a bird's head and wings treated as a single concept). This inconsistency breaks the alignment between model reasoning and human understanding. Furthermore, users have specific preferences for how concepts should look, yet current approaches provide no mechanism for incorporating their feedback. To address these issues, we introduce YoursProtoP, a novel interactive strategy that enables the personalization of prototypical parts - the visual concepts used by the model - according to user needs. By incorporating user supervision, YoursProtoP adapts and splits concepts used for both prediction and explanation to better match the user's preferences and understanding. Through experiments on both the synthetic FunnyBirds dataset and a real-world scenario using the CUB, CARS, and PETS datasets in a comprehensive user study, we demonstrate the effectiveness of YoursProtoP in achieving concept consistency without compromising the accuracy of the model.
arXiv
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/465600
Warning: the data shown here has not been validated by the university.

Citations
  • PubMed Central: not available
  • Scopus: not available
  • Web of Science: not available
  • OpenAlex: not available