EVO-RL: Evolutionary-Driven Reinforcement Learning

Iacca, Giovanni
2021-01-01

Abstract

In this work, we propose a novel approach to reinforcement learning driven by evolutionary computation. Our algorithm, dubbed Evolutionary-Driven Reinforcement Learning (evo-RL), embeds the reinforcement learning algorithm in an evolutionary cycle, in which we explicitly distinguish between purely evolvable (instinctive) behaviour and purely learnable behaviour. Moreover, this distinction is decided by the evolutionary process itself, allowing evo-RL to adapt to different environments. In addition, evo-RL facilitates learning in environments with rewardless states, which makes it better suited to real-world problems with incomplete information. To evaluate evo-RL, we compare the performance of several state-of-the-art reinforcement learning algorithms when operating within evo-RL against the same algorithms executed stand-alone. Results show that reinforcement learning algorithms embedded in our evo-RL approach significantly outperform their stand-alone counterparts on OpenAI Gym control problems with rewardless states, under the same computational budget.
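
As a rough illustration of the cycle the abstract describes, the Python sketch below embeds a placeholder RL training step inside an evolutionary outer loop, with the genome itself deciding which parts of the behaviour are instinctive (evolved) and which are left learnable. All names, the genome encoding, and the fitness placeholder are assumptions made for illustration only; they are not the authors' actual implementation.

    import random

    POP_SIZE, GENERATIONS, N_STATES = 20, 50, 16

    def random_genome():
        # Hypothetical encoding: per state, a flag marking the behaviour as
        # instinctive (evolved) or learnable, plus an action used when instinctive.
        return [(random.random() < 0.5, random.randrange(2)) for _ in range(N_STATES)]

    def mutate(genome):
        # Flip the instinctive/learnable flag of one random state.
        child = list(genome)
        i = random.randrange(len(child))
        child[i] = (not child[i][0], random.randrange(2))
        return child

    def rl_train_and_evaluate(genome):
        # Placeholder for the embedded RL phase: a real implementation would
        # train the RL agent on the learnable part while keeping the evolved
        # instinctive behaviour fixed, then return the achieved return as fitness.
        return sum(flag for flag, _ in genome) + random.random()

    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scored = sorted(population, key=rl_train_and_evaluate, reverse=True)
        survivors = scored[:POP_SIZE // 2]              # truncation selection
        population = survivors + [mutate(g) for g in survivors]

In this sketch the inner RL loop runs once per individual per generation, so the evolutionary process effectively selects for genomes whose instinctive/learnable split makes the RL phase most effective under a fixed budget.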
Year: 2021
Published in: GECCO 2021 Companion - Proceedings of the 2021 Genetic and Evolutionary Computation Conference Companion
Place: New York
Publisher: Association for Computing Machinery, Inc
ISBN: 9781450383516
Authors: Hallawa, Ahmed; Born, Thorsten; Schmeink, Anke; Dartmann, Guido; Peine, Arne; Martin, Lukas; Iacca, Giovanni; Eiben, A. E.; Ascheid, Gerd
EVO-RL: Evolutionary-Driven Reinforcement Learning / Hallawa, Ahmed; Born, Thorsten; Schmeink, Anke; Dartmann, Guido; Peine, Arne; Martin, Lukas; Iacca, Giovanni; Eiben, A. E.; Ascheid, Gerd. - (2021), pp. 153-154. (Paper presented at the 2021 Genetic and Evolutionary Computation Conference, GECCO 2021, held in Lille, France, 10th-14th July 2021) [10.1145/3449726.3459475].
Files in this record:

2007.04725.pdf (open access)
Type: Non-refereed preprint
Licence: All rights reserved
Size: 5.74 MB
Format: Adobe PDF

3449726.3459475.pdf (archive administrators only)
Type: Publisher's layout
Licence: All rights reserved
Size: 382.3 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/273714
Citations
  • PubMed Central: not available
  • Scopus: 12
  • Web of Science: not available
  • OpenAlex: not available