
Global Explainability of GNNs via Logic Combination of Learned Concepts

Azzolin, Steve; Longa, Antonio; Passerini, Andrea
2023-01-01

Abstract

While instance-level explanation of GNNs is a well-studied problem with plenty of approaches being developed, providing a global explanation for the behaviour of a GNN is much less explored, despite its potential in interpretability and debugging. Existing solutions either simply list local explanations for a given class, or generate a synthetic prototypical graph with maximal score for a given class, completely missing any combinatorial aspect that the GNN could have learned. In this work, we propose GLGExplainer (Global Logic-based GNN Explainer), the first Global Explainer capable of generating explanations as arbitrary Boolean combinations of learned graphical concepts. GLGExplainer is a fully differentiable architecture that takes local explanations as inputs and combines them into a logic formula over graphical concepts, represented as clusters of local explanations. Contrary to existing solutions, GLGExplainer provides accurate and human-interpretable global explanations that are perfectly aligned with ground-truth explanations (on synthetic data) or match existing domain knowledge (on real-world data). Extracted formulas are faithful to the model predictions, to the point of providing insights into some occasionally incorrect rules learned by the model, making GLGExplainer a promising diagnostic tool for learned GNNs.
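The abstract describes the pipeline at a high level. The sketch below is a minimal illustration of the general idea, not the authors' implementation: it assumes local-explanation embeddings have already been computed (e.g. by re-encoding the subgraphs returned by a local explainer), soft-assigns them to learned prototype concepts, and trains a plain linear readout (in place of the paper's differentiable logic layer) to mimic the GNN's predictions. All module names, shapes, and hyperparameters are illustrative.

```python
# Minimal sketch of the concept-clustering-plus-interpretable-readout idea.
# Assumptions: inputs are precomputed embeddings of local explanation subgraphs,
# and targets are the GNN's own predictions on those graphs. Not the paper's code.
import torch
import torch.nn as nn

class ConceptClustering(nn.Module):
    """Soft-assigns each local-explanation embedding to k learned prototypes."""
    def __init__(self, embed_dim: int, num_concepts: int, temperature: float = 0.5):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_concepts, embed_dim))
        self.temperature = temperature

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        dists = torch.cdist(h, self.prototypes)                  # (N, k) distances
        return torch.softmax(-dists / self.temperature, dim=-1)  # concept activations

class GlobalExplainer(nn.Module):
    """Concept activations -> class scores via an inspectable linear readout."""
    def __init__(self, embed_dim: int, num_concepts: int, num_classes: int):
        super().__init__()
        self.concepts = ConceptClustering(embed_dim, num_concepts)
        self.readout = nn.Linear(num_concepts, num_classes)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.readout(self.concepts(h))

if __name__ == "__main__":
    torch.manual_seed(0)
    h = torch.randn(256, 32)          # placeholder local-explanation embeddings
    y = torch.randint(0, 2, (256,))   # placeholder GNN predictions to be mimicked
    model = GlobalExplainer(embed_dim=32, num_concepts=4, num_classes=2)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(h), y)
        loss.backward()
        opt.step()
    # Large positive readout weights indicate which concepts the surrogate
    # associates with each class; thresholded activations give rule-like reads.
    print(model.readout.weight)
```

In the paper the readout is a differentiable logic layer that yields explicit Boolean formulas over the concepts; the linear layer above is only a stand-in to keep the sketch self-contained.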
Year: 2023
Conference: 11th International Conference on Learning Representations (ICLR 2023)
Place of publication: Appleton, WI, USA
Publisher: International Conference on Learning Representations (ICLR)
ISBN: 9781713899259
Authors: Azzolin, Steve; Longa, Antonio; Barbiero, Pietro; Lio, Pietro; Passerini, Andrea
Citation: Global Explainability of GNNs via Logic Combination of Learned Concepts / Azzolin, Steve; Longa, Antonio; Barbiero, Pietro; Lio, Pietro; Passerini, Andrea. - (2023), pp. 1-19. (Paper presented at the 11th International Conference on Learning Representations, ICLR 2023, held in Kigali, Rwanda, May 1-5, 2023).
Files in this record:

4005_global_explainability_of_gnns_.pdf
Access: Restricted (repository administrators only)
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 852.23 kB
Format: Adobe PDF

256_global_explainability_of_gnns_ (1).pdf
Access: Open access
Type: Refereed post-print (author's accepted manuscript)
License: All rights reserved
Size: 1.05 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/400869
Citations
  • PMC: not available
  • Scopus: 16
  • Web of Science (ISI): not available
  • OpenAlex: not available