Quality Diversity Evolutionary Learning of Decision Trees / Ferigo, Andrea; Custode, Leonardo Lucio; Iacca, Giovanni. - (2023), pp. 425-432. (Paper presented at the 38th Annual ACM Symposium on Applied Computing, SAC 2023, held in Tallinn, Estonia, 27th-31st March 2023) [10.1145/3555776.3577591].

Quality Diversity Evolutionary Learning of Decision Trees

Ferigo, Andrea; Custode, Leonardo Lucio; Iacca, Giovanni
2023-01-01

Abstract

Addressing the need for explainable Machine Learning has emerged as one of the most important research directions in modern Artificial Intelligence (AI). The current dominant paradigm in the field is based on black-box models, typically (deep) neural networks, which lack direct interpretability for human users, i.e., their outcomes (and, even more so, their inner workings) are opaque and hard to understand. This hinders the adoption of AI in safety-critical applications, where high stakes are involved. In these applications, explainable-by-design models, such as decision trees, may be more suitable, as they provide interpretability. Recent works have proposed hybridizing decision trees with Reinforcement Learning, to combine the advantages of the two approaches. So far, however, these works have focused on the optimization of such hybrid models. Here, we apply MAP-Elites to diversify hybrid models over a feature space that captures both model complexity and behavioral variability. We apply our method to two well-known control problems from the OpenAI Gym library, on which we discuss the "illumination" patterns projected by MAP-Elites, comparing its results against existing similar approaches.
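For readers unfamiliar with MAP-Elites, the abstract's core idea — keeping the best solution found for each cell of a discretized feature (descriptor) space, rather than a single global optimum — can be sketched as follows. This is a minimal illustration on a toy problem; the function names and the problem itself are hypothetical and do not reproduce the paper's decision-tree setup or its complexity/behavioral descriptors.

```python
import random

def map_elites(evaluate, describe, random_solution, mutate,
               iterations=1000, init=100, seed=0):
    """Minimal MAP-Elites loop: an archive maps each feature-space cell
    to the best (elite) solution seen for that cell."""
    rng = random.Random(seed)
    archive = {}  # cell -> (fitness, solution)
    for i in range(iterations):
        if i < init:
            x = random_solution(rng)          # bootstrap with random solutions
        else:
            parent = rng.choice(list(archive.values()))[1]
            x = mutate(parent, rng)           # then mutate a random elite
        cell = describe(x)                    # project onto the feature grid
        fit = evaluate(x)
        if cell not in archive or fit > archive[cell][0]:
            archive[cell] = (fit, x)          # keep the best per cell
    return archive

# Toy problem (hypothetical): solutions are points in [0, 1]^2, the
# descriptor is their cell in a 10x10 grid, and fitness rewards points
# close to the center (0.5, 0.5).
def rand_sol(rng):
    return (rng.random(), rng.random())

def mut(s, rng):
    return (min(1.0, max(0.0, s[0] + rng.gauss(0, 0.1))),
            min(1.0, max(0.0, s[1] + rng.gauss(0, 0.1))))

def desc(s):
    return (min(9, int(s[0] * 10)), min(9, int(s[1] * 10)))

def fit(s):
    return -abs(s[0] - 0.5) - abs(s[1] - 0.5)

archive = map_elites(fit, desc, rand_sol, mut, iterations=2000)
# len(archive) is the number of "illuminated" cells (at most 100 here)
```

The "illumination" patterns the abstract refers to are visualizations of such an archive: which cells of the feature space are filled, and how fit the elite in each cell is.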
2023
SAC '23: Proceedings of the 38th ACM/SIGAPP Symposium on Applied Computing
New York
Association for Computing Machinery
978-1-4503-9517-5
Ferigo, Andrea; Custode, Leonardo Lucio; Iacca, Giovanni
Quality Diversity Evolutionary Learning of Decision Trees / Ferigo, Andrea; Custode, Leonardo Lucio; Iacca, Giovanni. - (2023), pp. 425-432. (Intervento presentato al convegno 38th Annual ACM Symposium on Applied Computing, SAC 2023 tenutosi a Tallinn, Estonia nel 27th-31st March 2023) [10.1145/3555776.3577591].
Files in this product:

2208.12758.pdf
  Open access
  Type: Non-refereed preprint
  License: All rights reserved
  Size: 238.43 kB
  Format: Adobe PDF

3555776.3577591.pdf
  Open access
  Type: Publisher's layout
  License: All rights reserved
  Size: 1.45 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/352865
Citations
  • PubMed Central: ND
  • Scopus: 8
  • Web of Science: 5
  • OpenAlex: ND