Global Explainability of GNNs via Logic Combination of Learned Concepts / Azzolin, Steve; Longa, Antonio; Barbiero, Pietro; Liò, Pietro; Passerini, Andrea. - (2023), pp. 1-16. (Paper presented at the LOG 2022 conference, held as a virtual event on 9–12 December 2022.)
Global Explainability of GNNs via Logic Combination of Learned Concepts
Azzolin, Steve; Longa, Antonio; Passerini, Andrea
2023-01-01
Abstract
While instance-level explanation of GNNs is a well-studied problem with plenty of approaches being developed, providing a global explanation for the behaviour of a GNN is much less explored, despite its potential in interpretability and debugging. Existing solutions either simply list local explanations for a given class, or generate a synthetic prototypical graph with a maximal score for a given class, completely missing any combinatorial aspect that the GNN could have learned. In this work we propose GLGExplainer (Global Logic-based GNN Explainer), the first global explainer capable of generating explanations as arbitrary Boolean combinations of learned graphical concepts. GLGExplainer is a fully differentiable architecture that takes local explanations as inputs and combines them into a logic formula over graphical concepts, represented as clusters of local explanations. Contrary to existing solutions, GLGExplainer manages to provide accurate and human-interpretable global explanations on both synthetic and real-world datasets.
File: Global_Explainer___Ext__Abstract_LoG__2022.pdf
Access: Open access
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 1.05 MB
Format: Adobe PDF