
CACTO: Continuous Actor-Critic With Trajectory Optimization—Towards Global Optimality

Grandesso, Gianluigi; Alboni, Elisa; Rosati Papini, Gastone P.; Prete, Andrea Del
2023-01-01

Abstract

This letter presents a novel algorithm for the continuous control of dynamical systems that combines Trajectory Optimization (TO) and Reinforcement Learning (RL) in a single framework. The algorithm is motivated by the two main limitations of TO and RL when applied to continuous nonlinear systems with a non-convex cost function. Specifically, TO can get stuck in poor local minima when the search is not initialized close to a “good” minimum. On the other hand, with continuous state and control spaces, the RL training process may be excessively long and strongly dependent on the exploration strategy. Our algorithm therefore learns a “good” control policy via TO-guided RL policy search that, when used as an initial-guess provider for TO, makes the trajectory optimization process less prone to converging to poor local optima. Our method is validated on several reaching problems featuring non-convex obstacle avoidance, with different dynamical systems including a car model with a 6D state and a 3-joint planar manipulator. Our results show the great capability of CACTO to escape local minima, while being more computationally efficient than the Deep Deterministic Policy Gradient (DDPG) and Proximal Policy Optimization (PPO) RL algorithms.
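The abstract outlines the core CACTO loop: TO episodes, warm-started by the current policy, generate cost-to-go data that trains a critic and an actor, and the actor in turn warm-starts subsequent TO solves. The toy sketch below illustrates that data flow on a 1D single-integrator; it is only a reading aid, not the authors' implementation: the dynamics, cost, linear actor/critic, and the behavior-cloning actor update (standing in for CACTO's critic-gradient update) are all simplifying assumptions.

```python
# Toy sketch of the TO-guided actor-critic loop described in the abstract.
# NOT the CACTO implementation: 1D single-integrator x_{t+1} = x_t + u_t,
# quadratic cost, linear policy u = -k*x, quadratic critic V(x) ~ w0 + w1*x^2.
import numpy as np
from scipy.optimize import minimize

T, n_episodes = 10, 30
rng = np.random.default_rng(0)
k_actor = 0.0  # linear policy gain (hypothetical placeholder)

def traj_cost(u_seq, x0):
    """Running + terminal cost of a control sequence from x0."""
    x, c = x0, 0.0
    for u in u_seq:
        c += x**2 + 0.1 * u**2
        x = x + u
    return c + x**2

data_x, data_u, data_j = [], [], []
for ep in range(n_episodes):
    x0 = rng.uniform(-2.0, 2.0)
    # Roll out the current policy to warm-start TO (the key CACTO step).
    guess, x = [], x0
    for _ in range(T):
        u = -k_actor * x
        guess.append(u)
        x = x + u
    sol = minimize(traj_cost, np.array(guess), args=(x0,))
    # Store states, controls, and cost-to-go along the TO solution.
    x = x0
    for t in range(T):
        data_x.append(x)
        data_u.append(sol.x[t])
        data_j.append(traj_cost(sol.x[t:], x))
        x = x + sol.x[t]
    # Critic: least-squares fit of V(x) ~ w0 + w1*x^2 to cost-to-go targets.
    X = np.column_stack([np.ones(len(data_x)), np.square(data_x)])
    w_critic = np.linalg.lstsq(X, np.array(data_j), rcond=None)[0]
    # Actor: regress u = -k*x on the TO controls (simplified update; the
    # paper's actor instead follows the critic's gradient).
    xs, us = np.array(data_x), np.array(data_u)
    k_actor = -float(xs @ us) / (float(xs @ xs) + 1e-9)

print(f"learned gain k = {k_actor:.3f}, critic weights = {w_critic}")
```

In the full method the learned policy is then used as the initial-guess provider for TO on new problems, which is what makes the optimization less prone to poor local optima.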
Year: 2023
Issue: 6
Authors: Grandesso, Gianluigi; Alboni, Elisa; Rosati Papini, Gastone P.; Wensing, Patrick M.; Prete, Andrea Del
Citation: CACTO: Continuous Actor-Critic With Trajectory Optimization—Towards Global Optimality / Grandesso, Gianluigi; Alboni, Elisa; Rosati Papini, Gastone P.; Wensing, Patrick M.; Prete, Andrea Del. - In: IEEE ROBOTICS AND AUTOMATION LETTERS. - ISSN 2377-3766. - 8:6(2023), pp. 3318-3325. [10.1109/LRA.2023.3266985]
Files in this record:

CACTO.pdf
  Access: Open access
  Type: Refereed author's manuscript (post-print)
  License: All rights reserved
  Size: 8.54 MB
  Format: Adobe PDF

CACTO_Continuous_Actor-Critic_With_Trajectory_OptimizationTowards_Global_Optimality.pdf
  Access: Archive administrators only
  Type: Publisher's layout
  License: All rights reserved
  Size: 2.68 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/376907
Citations
  • PMC: n/a
  • Scopus: 7
  • Web of Science: 3
  • OpenAlex: n/a