
Whitening for Self-Supervised Representation Learning

A. Ermolov; A. Siarohin; E. Sangineto; N. Sebe
2021

Abstract

Most current self-supervised representation learning (SSL) methods are based on the contrastive loss and the instance-discrimination task, where augmented versions of the same image instance (“positives”) are contrasted with instances extracted from other images (“negatives”). For the learning to be effective, many negatives must be compared with each positive pair, which is computationally demanding. In this paper, we propose a different direction and a new loss function for SSL, based on the whitening of the latent-space features. The whitening operation has a “scattering” effect on the batch samples, avoiding degenerate solutions where all the sample representations collapse to a single point. Our solution does not require asymmetric networks and is conceptually simple. Moreover, since negatives are not needed, we can extract multiple positive pairs from the same image instance. The source code of the method and of all the experiments is available at: https://github.com/htdt/self-supervised
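The loss the abstract describes can be sketched compactly. The PyTorch snippet below is a minimal illustration only, not the authors' implementation (their repository uses a Cholesky-based whitening module; this sketch substitutes an eigendecomposition-based ZCA whitening), and the names whiten and w_mse_loss are placeholders introduced here:

import torch
import torch.nn.functional as F

def whiten(z: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """ZCA-style batch whitening: output has zero mean and (approximately)
    identity covariance, which "scatters" the batch and prevents collapse."""
    z = z - z.mean(dim=0)                           # center each feature
    cov = (z.T @ z) / (z.shape[0] - 1)              # d x d sample covariance
    eigval, eigvec = torch.linalg.eigh(
        cov + eps * torch.eye(z.shape[1], device=z.device)
    )
    # W = U diag(lambda^{-1/2}) U^T  (ZCA whitening matrix)
    w = eigvec @ torch.diag(eigval.clamp(min=eps).rsqrt()) @ eigvec.T
    return z @ w

def w_mse_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """Whitened MSE: whiten each view's batch, then pull together the two
    (whitened, normalized) views of every positive pair. No negatives needed."""
    v1 = F.normalize(whiten(z1), dim=1)
    v2 = F.normalize(whiten(z2), dim=1)
    return (v1 - v2).pow(2).sum(dim=1).mean()

Because whitening constrains the batch covariance to the identity, the representations cannot all collapse to a single point, which is why the MSE between positives alone suffices as a training signal.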
Year: 2021
Conference: International Conference on Machine Learning (ICML’21)
Place of publication: Red Hook, NY, USA
Publisher: Curran Associates
ISBN: 9781713845065
Authors: Ermolov, A.; Siarohin, A.; Sangineto, E.; Sebe, N.
Citation: Whitening for Self-Supervised Representation Learning / Ermolov, A.; Siarohin, A.; Sangineto, E.; Sebe, N. - (2021). (Paper presented at the International Conference on Machine Learning (ICML’21), held online, 18th-24th July 2021.)
Files in this record:

File: ermolov21a.pdf
Access: open access
Type: Refereed author’s manuscript (post-print)
License: All rights reserved
Size: 1.5 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11572/326196
Citations
  • PubMed Central: n/a
  • Scopus: 75
  • Web of Science: 1