BEARS Make Neuro-Symbolic Models Aware of their Reasoning Shortcuts

Marconato, Emanuele (co-first author); Bortolotti, Samuele (co-first author); Vergari, Antonio (co-last author); Passerini, Andrea (co-last author); Teso, Stefano (co-last author)
2024-01-01

Abstract

Neuro-Symbolic (NeSy) predictors that conform to symbolic knowledge – encoding, e.g., safety constraints – can be affected by Reasoning Shortcuts (RSs): They learn concepts consistent with the symbolic knowledge by exploiting unintended semantics. RSs compromise reliability and generalization and, as we show in this paper, they are linked to NeSy models being overconfident about the predicted concepts. Unfortunately, the only trustworthy mitigation strategy requires collecting costly dense supervision over the concepts. Rather than attempting to avoid RSs altogether, we propose to ensure NeSy models are aware of the semantic ambiguity of the concepts they learn, thus enabling their users to identify and distrust low-quality concepts. Starting from three simple desiderata, we derive BEARS (BE Aware of Reasoning Shortcuts), an ensembling technique that calibrates the model's concept-level confidence without compromising prediction accuracy, thus encouraging NeSy architectures to be uncertain about concepts affected by RSs. We show empirically that BEARS improves RS-awareness of several state-of-the-art NeSy models, and also facilitates acquiring informative dense annotations for mitigation purposes.
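As a rough illustration of the ensembling idea sketched in the abstract (not the authors' implementation; function names, shapes, and numbers below are hypothetical), concept-level confidence can be obtained by averaging the concept distributions predicted by several ensemble members and measuring the entropy of the result: members disagreeing on a concept, as tends to happen under reasoning shortcuts, then yields low confidence for that concept.

import numpy as np

def ensemble_concept_confidence(member_probs):
    """member_probs: array of shape (n_members, n_concepts, n_values);
    each slice is one member's categorical distribution over a concept's values.
    Returns the averaged distributions and their per-concept entropies."""
    avg = member_probs.mean(axis=0)                      # (n_concepts, n_values)
    entropy = -(avg * np.log(avg + 1e-12)).sum(axis=-1)  # high entropy = ambiguous concept
    return avg, entropy

# Toy example: two members disagree on concept 0 but agree on concept 1.
probs = np.array([
    [[0.9, 0.1], [0.8, 0.2]],   # member 1
    [[0.1, 0.9], [0.7, 0.3]],   # member 2
])
avg, ent = ensemble_concept_confidence(probs)
print(avg)  # concept 0 ends up near-uniform, i.e. flagged as unreliable
print(ent)  # concept 0 has much higher entropy than concept 1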
2024
Uncertainty in Artificial Intelligence, 15-19 July 2024, Barcelona, ES
Barcelona, ES
PMLR
Marconato, Emanuele; Bortolotti, Samuele; van Krieken, Emile; Vergari, Antonio; Passerini, Andrea; Teso, Stefano
BEARS Make Neuro-Symbolic Models Aware of their Reasoning Shortcuts / Marconato, Emanuele; Bortolotti, Samuele; van Krieken, Emile; Vergari, Antonio; Passerini, Andrea; Teso, Stefano. - ELECTRONIC. - (2024). (Paper presented at the UAI2024 conference held in Barcelona, ES, 16th-18th July 2024) [10.48550/arXiv.2402.12240].
Files for this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/428250

Warning! The displayed data have not been validated by the university.
