Global Explainability of GNNs via Logic Combination of Learned Concepts / Azzolin, Steve; Longa, Antonio; Barbiero, Pietro; Liò, Pietro; Passerini, Andrea. - (2023), pp. 1-16. (Paper presented at the LOG 2022 conference, held as a virtual event, 9–12 December 2022.)

Global Explainability of GNNs via Logic Combination of Learned Concepts

Azzolin, Steve; Longa, Antonio; Passerini, Andrea
2023-01-01

Abstract

While instance-level explanation of GNNs is a well-studied problem with plenty of approaches being developed, providing a global explanation for the behaviour of a GNN is much less explored, despite its potential in interpretability and debugging. Existing solutions either simply list local explanations for a given class, or generate a synthetic prototypical graph with maximal score for a given class, completely missing any combinatorial aspect that the GNN could have learned. In this work we propose GLGExplainer (Global Logic-based GNN Explainer), the first global explainer capable of generating explanations as arbitrary Boolean combinations of learned graphical concepts. GLGExplainer is a fully differentiable architecture that takes local explanations as inputs and combines them into a logic formula over graphical concepts, represented as clusters of local explanations. Contrary to existing solutions, GLGExplainer manages to provide accurate and human-interpretable global explanations on both synthetic and real-world datasets.
2023
The First Learning on Graphs Conference
Azzolin, Steve; Longa, Antonio; Barbiero, Pietro; Liò, Pietro; Passerini, Andrea
Files in this item:
File: Global_Explainer___Ext__Abstract_LoG__2022.pdf
Access: open access
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 1.05 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/364923
Warning: the data displayed have not been validated by the university.
