Explaining Human Comparisons Using Alignment-Importance Heatmaps / Truong, Nhut; Pesenti, Dario; Hasson, Uri. - In: COMPUTATIONAL BRAIN & BEHAVIOR. - ISSN 2522-087X. - 2025:(2025). [10.1007/s42113-025-00235-x]

Explaining Human Comparisons Using Alignment-Importance Heatmaps

Truong, Nhut (first author); Pesenti, Dario; Hasson, Uri (last author)
2025-01-01

Abstract

We present a computational explainability approach for human comparison tasks, using Alignment Importance Score (AIS) heatmaps derived from deep-vision models. The AIS reflects a feature map’s unique contribution to the alignment between a deep neural network’s (DNN) representational geometry and that of humans. We first validate the AIS by showing that the prediction of out-of-sample human similarity judgments is improved when constructing representations using only higher-scoring AIS feature maps identified from a training set. We then compute image-specific heatmaps that visually indicate the areas that correspond to feature maps with higher AIS scores. These maps provide an intuitive explanation of which image areas are more important when an image is compared to other images in a cohort. We observe a correspondence between these heatmaps and saliency maps produced by a gaze-prediction model. However, in some cases, meaningful differences emerge, as the dimensions relevant for comparison are not necessarily the most visually salient. To conclude, Alignment Importance improves the prediction of human similarity judgments from DNN embeddings and provides interpretable insights into the relevant information in image space.
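The core quantity described in the abstract can be sketched in code. The following is a minimal illustration, not the paper's implementation: it assumes an RSA-style alignment measure (Spearman correlation between the model's and humans' pairwise-distance vectors) and a leave-one-out definition of a feature map's unique contribution. The function names, the `correlation` distance metric, and the ablation scheme are assumptions for illustration; the paper's actual metric and AIS computation may differ.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr


def alignment(features, human_rdm_vec):
    # Representational alignment: Spearman correlation between the model's
    # pairwise-distance vector and the human one (an RSA-style measure;
    # assumed here for illustration).
    model_rdm_vec = pdist(features, metric="correlation")
    return spearmanr(model_rdm_vec, human_rdm_vec)[0]


def alignment_importance(feature_maps, human_rdm_vec):
    # feature_maps: array of shape (n_images, n_maps, map_dim), each feature
    # map flattened per image. AIS of map i is taken here as the drop in
    # alignment when map i is ablated (a leave-one-out proxy for its
    # "unique contribution"; hypothetical, not the paper's exact definition).
    n_images, n_maps, _ = feature_maps.shape
    full = alignment(feature_maps.reshape(n_images, -1), human_rdm_vec)
    ais = np.empty(n_maps)
    for i in range(n_maps):
        rest = np.delete(feature_maps, i, axis=1).reshape(n_images, -1)
        ais[i] = full - alignment(rest, human_rdm_vec)
    return ais
```

Under this sketch, keeping only feature maps with high AIS (estimated on a training set) and rebuilding the representation from them is what the validation step in the abstract would correspond to.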
Files in this record:
File: truongCBB.pdf (open access)
Type: Publisher's version (publisher's layout)
License: Creative Commons
Size: 4.65 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/449611
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science (ISI): ND
  • OpenAlex: 1