Temporal Alignment for History Representation in Reinforcement Learning

Ermolov, Aleksandr; Sangineto, Enver; Sebe, Nicu
2022-01-01

Abstract

Environments in Reinforcement Learning are usually only partially observable. A possible solution to this problem is to provide the agent with information about its past. However, providing complete observations for many past steps can be excessive. Inspired by human memory, we propose to represent history with only the important changes in the environment and, in our approach, to obtain this representation automatically using self-supervision. Our method (TempAl) aligns temporally-close frames, revealing a general, slowly varying state of the environment. The procedure is based on a contrastive loss that pulls the embeddings of nearby observations toward each other while pushing them away from the other samples in the batch. It can be interpreted as a metric that captures the temporal relations between observations. We propose to combine the common instantaneous representation with our history representation, and we evaluate TempAl on all available Atari games from the Arcade Learning Environment. TempAl surpasses the instantaneous-only baseline in 35 out of 49 environments. The source code of the method and of all the experiments is available at https://github.com/htdt/tempal.
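
The alignment procedure described in the abstract can be sketched in a few lines. Below is a minimal, illustrative InfoNCE-style contrastive loss in PyTorch; the function name, the temperature value, and the surrounding training setup are hypothetical stand-ins, not the authors' implementation, which is available in the repository linked above.

    import torch
    import torch.nn.functional as F

    def temporal_alignment_loss(z_a, z_b, temperature=0.1):
        # z_a, z_b: (batch, dim) embeddings of temporally-close frame pairs,
        # produced by the same encoder from observations a few steps apart.
        z_a = F.normalize(z_a, dim=1)
        z_b = F.normalize(z_b, dim=1)
        # Cosine-similarity matrix: entry (i, j) compares sample i of the
        # first view with sample j of the second view.
        logits = z_a @ z_b.t() / temperature
        # Positives lie on the diagonal: pair i should match itself, while
        # all other samples in the batch act as negatives.
        targets = torch.arange(z_a.size(0), device=z_a.device)
        return F.cross_entropy(logits, targets)

In use, two frames sampled a few steps apart would be encoded with a shared network and passed to this loss, so that temporally-close observations are pulled together in embedding space while the rest of the batch is pushed away.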
Year: 2022
Conference: International Conference on Pattern Recognition
Publisher: IEEE, 345 E 47th St, New York, NY 10017, USA
ISBN: 978-1-6654-9062-7
Authors: Ermolov, Aleksandr; Sangineto, Enver; Sebe, Nicu
Citation: Temporal Alignment for History Representation in Reinforcement Learning / Ermolov, Aleksandr; Sangineto, Enver; Sebe, Nicu. - 2022-:(2022), pp. 2172-2178. (Paper presented at the 26th International Conference on Pattern Recognition, ICPR 2022, held at the Palais des Congrès de Montréal, Canada, 21-25 August 2022) [DOI: 10.1109/ICPR56361.2022.9956553].
Files in this record:
Temporal_Alignment_for_History_Representation_in_Reinforcement_Learning.pdf
Access: archive administrators only
Type: publisher's version (publisher's layout)
License: all rights reserved
Size: 3.49 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/361306
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: 1
  • OpenAlex: not available