

Interpretable pipelines with evolutionary optimized modules for reinforcement learning tasks with visual inputs

Custode, Leonardo Lucio; Iacca, Giovanni
2022-01-01

Abstract

The importance of explainability in AI has become a pressing concern, and several explainable AI (XAI) approaches have recently been proposed. However, most of the available XAI techniques are post-hoc methods, which may be only partially reliable, as they do not exactly reflect the state of the original models. A more direct way to achieve XAI is therefore through interpretable (also called glass-box) models. These models have been shown to achieve comparable (and, in some cases, better) performance with respect to black-box models in various tasks such as classification and reinforcement learning. However, they struggle when working with raw data, especially when the input dimensionality increases and the raw inputs alone do not provide valuable insights into the decision-making process. Here, we propose to use end-to-end pipelines composed of multiple interpretable models co-optimized by means of evolutionary algorithms, which allows us to decompose the decision-making process into two parts: computing high-level features from raw data, and reasoning on the extracted high-level features. We test our approach in reinforcement learning environments from the Atari benchmark, where we obtain results comparable to black-box approaches in settings without stochastic frame-skipping, while performance degrades in frame-skipping settings.
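The following is a minimal, self-contained sketch of the two-stage pipeline idea described in the abstract: one interpretable module extracts high-level features from raw pixels, a second interpretable module reasons on those features, and both are co-optimized by an evolutionary algorithm. The toy environment, the feature parameterization, and the (1+1) evolution strategy used here are illustrative assumptions, not the authors' actual Atari setup or evolutionary operators.

import numpy as np

# --- Hypothetical stand-in environment (NOT the Atari benchmark used in the paper) ---
# A toy "visual" task: the agent sees a small grayscale frame containing a bright column
# and must pick the action (0=left, 1=stay, 2=right) that moves its marker toward it.
class ToyVisualEnv:
    def __init__(self, size=8, horizon=50, seed=0):
        self.size, self.horizon = size, horizon
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t = 0
        self.agent_col = self.size // 2
        self.blob_col = self.rng.integers(0, self.size)
        return self._frame()

    def _frame(self):
        frame = np.zeros((self.size, self.size), dtype=np.float32)
        frame[:, self.blob_col] = 1.0          # target column
        frame[-1, self.agent_col] = 0.5        # agent marker
        return frame

    def step(self, action):
        self.agent_col = int(np.clip(self.agent_col + (action - 1), 0, self.size - 1))
        self.t += 1
        reward = 1.0 if self.agent_col == self.blob_col else 0.0
        return self._frame(), reward, self.t >= self.horizon


# --- Interpretable module 1: high-level features from raw pixels ---
# Each feature is a thresholded weighted column sum; the evolved genome holds the
# per-feature weights and thresholds, so the mapping stays human-readable.
def extract_features(frame, genome_feat):
    weights, thresholds = genome_feat
    col_sums = frame.sum(axis=0)          # one scalar per image column
    projections = weights @ col_sums      # (n_features,) linear projections
    return (projections > thresholds).astype(np.float32)


# --- Interpretable module 2: reasoning on the extracted features ---
# A tiny linear score per action; the argmax rule is easy to inspect.
def policy(features, genome_dec):
    return int(np.argmax(genome_dec @ features))


def episode_return(genome, env):
    genome_feat, genome_dec = genome
    frame, done, total = env.reset(), False, 0.0
    while not done:
        action = policy(extract_features(frame, genome_feat), genome_dec)
        frame, reward, done = env.step(action)
        total += reward
    return total


# --- Co-optimization of both modules with a simple (1+1) evolution strategy ---
# (the paper co-optimizes the pipeline with evolutionary algorithms; the mutation
#  operator and selection scheme below are illustrative placeholders)
def evolve(n_features=4, n_actions=3, generations=300, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    env = ToyVisualEnv(seed=seed)
    parent = ((rng.normal(size=(n_features, env.size)), rng.normal(size=n_features)),
              rng.normal(size=(n_actions, n_features)))
    best = episode_return(parent, env)
    for _ in range(generations):
        (w, t), d = parent
        child = ((w + sigma * rng.normal(size=w.shape),
                  t + sigma * rng.normal(size=t.shape)),
                 d + sigma * rng.normal(size=d.shape))
        fit = episode_return(child, env)
        if fit >= best:
            parent, best = child, fit
    return parent, best


if __name__ == "__main__":
    _, best_return = evolve()
    print("best episode return:", best_return)

In this sketch both genomes are mutated and selected jointly, so the feature extractor and the decision rule are evolved as a single pipeline rather than in isolation, which mirrors the co-optimization idea of the paper at a very small scale.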
2022
GECCO '22: Proceedings of the Genetic and Evolutionary Computation Conference Companion
New York
ACM
9781450392686
Custode, Leonardo Lucio; Iacca, Giovanni
Interpretable pipelines with evolutionary optimized modules for reinforcement learning tasks with visual inputs / Custode, Leonardo Lucio; Iacca, Giovanni. - (2022), pp. 224-227. (Paper presented at the GECCO '22 conference, held in Boston, 9th-13th July 2022) [10.1145/3520304.3528897].
Files in this record:

File: 3520304.3528897.pdf
Access: open access
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 496.3 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/351940
Citations
  • Scopus 2