
Beyond Topological Self-Explainable GNNs: A Formal Explainability Perspective / Azzolin, Steve; Malhotra, Sagar; Passerini, Andrea; Teso, Stefano. - ELETTRONICO. - 75(2026). ( ICML 2025 Canada 13th July - 19th July 2025).

Beyond Topological Self-Explainable GNNs: A Formal Explainability Perspective

Azzolin, Steve (first author); Malhotra, Sagar (second author); Passerini, Andrea (penultimate author); Teso, Stefano (last author)
2026-01-01

Abstract

Self-Explainable Graph Neural Networks (SE-GNNs) are popular explainable-by-design GNNs, but the properties and limitations of their explanations are not well understood. Our first contribution fills this gap by formalizing the explanations extracted by some popular SE-GNNs, referred to as Minimal Explanations (MEs), and comparing them to established notions of explanations, namely Prime Implicant (PI) and faithful explanations. Our analysis reveals that MEs match PI explanations for a restricted but significant family of tasks. In general, however, they can be less informative than PI explanations and are surprisingly misaligned with widely accepted notions of faithfulness. Although faithful and PI explanations are informative, they are intractable to find, and we show that they can be prohibitively large. Given these observations, a natural choice is to augment SE-GNNs with alternative explanation modalities that compensate for these limitations. To this end, we propose Dual-Channel GNNs, which integrate a white-box rule extractor and a standard SE-GNN, adaptively combining both channels. Our experiments show that even a simple instantiation of Dual-Channel GNNs can recover succinct rules and perform on par with or better than widely used SE-GNNs.
2026
Proceedings of the 42nd International Conference on Machine Learning
Cambridge, MA: JMLR
Azzolin, Steve; Malhotra, Sagar; Passerini, Andrea; Teso, Stefano
Files in this record:
File: IRIS.pdf (open access)
Description: Full paper
Type: Editorial version (Publisher's layout)
License: All rights reserved
Size: 1.34 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/467013
Citations
  • PMC: ND
  • Scopus: ND
  • Web of Science: ND
  • OpenAlex: ND