Sayar, Erdi; Iacca, Giovanni; Knoll, Alois. Multi-Objective Evolutionary Hindsight Experience Replay for Robot Manipulation Tasks. In: Proceedings of the 2024 Genetic and Evolutionary Computation Conference (GECCO 2024), Melbourne, 14th-18th July 2024, pp. 403-411. DOI: 10.1145/3638529.3654045.
Multi-Objective Evolutionary Hindsight Experience Replay for Robot Manipulation Tasks
Iacca, Giovanni
2024-01-01
Abstract
Reinforcement learning (RL) algorithms often face challenges in efficiently learning effective policies for sparse-reward multi-goal robot manipulation tasks, thus requiring vast amounts of experience. The state-of-the-art algorithm in the field, Hindsight Experience Replay (HER), addresses this issue by reusing failed trajectories and replacing the desired goal with hindsight goals. However, HER performs poorly when the desired goal is distant from the initial state. To address this limitation, Hindsight Goal Generation (HGG) has been proposed, which generates a curriculum of goals from already visited states. This curriculum generation is based on a single objective and does not take obstacles into account. Here, we take a step forward by proposing Multi-Objective Evolutionary Hindsight Experience Replay (MOEHER), a novel curriculum RL algorithm that reformulates curriculum generation considering multiple objectives and obstacles. MOEHER utilizes NSGA-II to generate a curriculum that is optimized w.r.t. four objectives, namely the Q-function, the goal-proximity function, and two distance metrics, while simultaneously satisfying constraints on the obstacles. We evaluate MOEHER on four different sparse-reward robot manipulation tasks, with and without obstacles, and compare it with HER and HGG. The results demonstrate that MOEHER surpasses or performs on par with these methods on the tested tasks.
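The abstract describes the optimization step only at a high level; as a purely illustrative aid, the minimal sketch below shows how a constrained, four-objective goal-selection problem of this kind could be posed for NSGA-II using the pymoo library (assumed version 0.6 or later). The `q_value` and `goal_proximity` functions, the obstacle geometry, and the two distance metrics are hypothetical placeholders so that the snippet runs stand-alone; they are not the formulations used in the paper, where such quantities would come from the learned critic and the replay buffer.

```python
"""Illustrative sketch (not the authors' code): selecting curriculum goals
with NSGA-II via pymoo (>= 0.6 assumed). All objective/constraint functions
are placeholders, not the paper's definitions."""
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import Problem
from pymoo.optimize import minimize

# --- Placeholder task data (assumptions for illustration only) --------------
rng = np.random.default_rng(0)
desired_goal = np.array([1.0, 0.5])                 # task goal in a 2-D workspace
initial_pos = np.array([0.0, 0.0])                  # initial object/gripper position
visited_states = rng.uniform(0.0, 1.0, (100, 2))    # achieved goals from the replay buffer
obstacle_center, obstacle_radius = np.array([0.6, 0.6]), 0.15

def q_value(goal):
    """Stand-in for the critic's Q-value of a goal (higher = easier to reach)."""
    return -np.linalg.norm(goal - visited_states.mean(axis=0))

def goal_proximity(goal):
    """Stand-in goal-proximity score w.r.t. the desired goal (lower = closer)."""
    return np.linalg.norm(goal - desired_goal)

class GoalCurriculumProblem(Problem):
    """Four objectives (all minimized) plus one obstacle-clearance constraint."""
    def __init__(self):
        super().__init__(n_var=2, n_obj=4, n_ieq_constr=1,
                         xl=np.zeros(2), xu=np.ones(2))

    def _evaluate(self, X, out, *args, **kwargs):
        f1 = np.array([-q_value(g) for g in X])          # maximize Q  ->  minimize -Q
        f2 = np.array([goal_proximity(g) for g in X])    # goal-proximity objective
        f3 = np.linalg.norm(X - initial_pos, axis=1)     # distance metric 1 (illustrative)
        f4 = np.min(np.linalg.norm(                      # distance metric 2 (illustrative):
            X[:, None, :] - visited_states[None, :, :],  # distance to nearest visited state
            axis=2), axis=1)
        # Inequality constraint g(x) <= 0: candidate goal must lie outside the obstacle.
        g1 = obstacle_radius - np.linalg.norm(X - obstacle_center, axis=1)
        out["F"] = np.column_stack([f1, f2, f3, f4])
        out["G"] = g1.reshape(-1, 1)

res = minimize(GoalCurriculumProblem(), NSGA2(pop_size=40),
               ("n_gen", 30), seed=0, verbose=False)
curriculum_goals = res.X   # Pareto-optimal candidate goals for the next training phase
```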
| File | Type | License | Access | Size | Format | |
|---|---|---|---|---|---|---|
| GECCO24_EA_curriculum_RL (main paper).pdf | Versione editoriale (Publisher's layout) | Tutti i diritti riservati (All rights reserved) | Solo gestori archivio (archive managers only) | 4.96 MB | Adobe PDF | Visualizza/Apri (View/Open) |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.



