Evolutionary learning of interpretable decision trees / Custode, Leonardo Lucio; Iacca, Giovanni. - In: IEEE ACCESS. - ISSN 2169-3536. - 11:(2023), pp. 6169-6184. [10.1109/ACCESS.2023.3236260]

Evolutionary learning of interpretable decision trees

Custode, Leonardo Lucio; Iacca, Giovanni
2023-01-01

Abstract

In the last decade, reinforcement learning (RL) has been used to solve several tasks with human-level performance. However, there is a growing demand for interpretable RL, i.e., the need to understand how an RL agent works and the rationale behind its decisions. Not only do we need interpretability to assess the safety of such agents, but we may also need it to gain insights into unknown problems. In this work, we propose a novel optimization approach to interpretable RL that builds decision trees. While techniques that optimize decision trees for RL do exist, they usually employ greedy algorithms or do not exploit the rewards given by the environment; as a result, they may either get stuck in local optima or be inefficient. In contrast, our approach is based on a two-level optimization scheme that combines the advantages of evolutionary algorithms with the benefits of Q-learning. This scheme decomposes the problem into two sub-problems: finding a meaningful decomposition of the state space, and associating an action with each subspace. We test the proposed method on three well-known RL benchmarks, as well as on a pandemic control task, and find it competitive with the state of the art in both performance and interpretability.
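
The abstract describes a two-level scheme: an outer evolutionary search over decision-tree structures (the decomposition of the state space) and an inner Q-learning pass that associates an action with each leaf. The following is a minimal, self-contained Python sketch of that general idea only; the toy environment (ToyEnv), the tree classes, and all parameters are hypothetical illustrations and are not taken from the paper or its code.

import random

N_ACTIONS = 2

class ToyEnv:
    """Hypothetical 1-D corridor: action 0 moves left, 1 moves right; reward 1 only at the right end."""
    def __init__(self, length=10, max_steps=50):
        self.length, self.max_steps = length, max_steps
    def reset(self):
        self.pos, self.t = self.length // 2, 0
        return self.pos / self.length              # single normalized feature in [0, 1]
    def step(self, action):
        self.t += 1
        self.pos += 1 if action == 1 else -1
        reached = self.pos >= self.length
        done = reached or self.pos <= 0 or self.t >= self.max_steps
        return self.pos / self.length, (1.0 if reached else 0.0), done

class Leaf:
    """A leaf holds one Q-value per action, updated online by Q-learning."""
    def __init__(self):
        self.q = [0.0] * N_ACTIONS
    def act(self, eps=0.1):
        if random.random() < eps:
            return random.randrange(N_ACTIONS)
        return max(range(N_ACTIONS), key=lambda a: self.q[a])

class Node:
    """An internal test 'feature < threshold' that splits the state space into subspaces."""
    def __init__(self, threshold, left, right):
        self.threshold, self.left, self.right = threshold, left, right
    def leaf_for(self, obs):
        child = self.left if obs < self.threshold else self.right
        return child if isinstance(child, Leaf) else child.leaf_for(obs)

def evaluate(tree, env, episodes=30, alpha=0.5, gamma=0.99):
    """Inner level: run Q-learning on the leaves of a fixed tree; return mean episodic reward."""
    total = 0.0
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            leaf = tree.leaf_for(obs)
            action = leaf.act()
            obs2, reward, done = env.step(action)
            target = reward + (0.0 if done else gamma * max(tree.leaf_for(obs2).q))
            leaf.q[action] += alpha * (target - leaf.q[action])
            obs, total = obs2, total + reward
    return total / episodes

def random_tree():
    return Node(random.random(), Leaf(), Leaf())

def mutate(tree):
    # Perturb the split threshold; fresh leaves are re-trained from scratch by evaluate().
    t = min(1.0, max(0.0, tree.threshold + random.gauss(0, 0.1)))
    return Node(t, Leaf(), Leaf())

if __name__ == "__main__":
    env, best = ToyEnv(), random_tree()
    best_fit = evaluate(best, env)
    for _ in range(20):                             # outer level: (1+1)-style evolution
        cand = mutate(best)
        fit = evaluate(cand, env)
        if fit >= best_fit:
            best, best_fit = cand, fit
    print(f"best split: feature < {best.threshold:.2f}, mean reward = {best_fit:.2f}")

The sketch deliberately keeps depth-1 trees and a single observation feature; the paper's method evolves richer tree structures and evaluates them on standard RL benchmarks.
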
Files in this item:

2012.07723.pdf (open access)
  Type: Non-refereed preprint
  License: All rights reserved
  Size: 1.01 MB
  Format: Adobe PDF

Evolutionary_learning_of_interpretable_decision_trees.pdf (open access)
  Type: Refereed author's manuscript (post-print)
  License: Creative Commons
  Size: 511.69 kB
  Format: Adobe PDF

Evolutionary_Learning_of_Interpretable_Decision_Trees (1).pdf (open access)
  Type: Publisher's layout (editorial version)
  License: Creative Commons
  Size: 5.65 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/284915
Citations
  • PubMed Central: N/A
  • Scopus: 24
  • Web of Science: 14
  • OpenAlex: N/A