Efficient Training of Visual Transformers with Small-Size Datasets

Liu, Yahui; Sangineto, Enver; Sebe, Nicu; De Nadai, Marco
2021-01-01

Abstract

Visual Transformers (VTs) are emerging as an architectural paradigm alternative to Convolutional Neural Networks (CNNs). Unlike CNNs, VTs can capture global relations between image elements and they potentially have a larger representation capacity. However, the lack of the typical convolutional inductive bias makes these models more data-hungry than common CNNs. In fact, some local properties of the visual domain which are embedded in the CNN architectural design must, in VTs, be learned from samples. In this paper, we empirically analyse different VTs, comparing their robustness in a small training-set regime, and we show that, despite having comparable accuracy when trained on ImageNet, their performance on smaller datasets can differ significantly. Moreover, we propose an auxiliary self-supervised task which can extract additional information from images with only a negligible computational overhead. This task encourages the VTs to learn spatial relations within an image, making VT training much more robust when training data is scarce. Our task is used jointly with the standard (supervised) training and it does not depend on specific architectural choices, thus it can be easily plugged into existing VTs. Using an extensive evaluation with different VTs and datasets, we show that our method can improve (sometimes dramatically) the final accuracy of the VTs. Our code is available at: https://github.com/yhlleo/VTs-Drloc.
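As a rough illustration of the kind of auxiliary objective described in the abstract, the sketch below shows one plausible way to make a VT regress the relative position of pairs of its output tokens. It is based only on the abstract's description (and on the repository name, VTs-Drloc); the class name `RelativeLocalizationHead`, the pair-sampling strategy and all hyper-parameters are hypothetical and are not taken from the authors' code.

```python
# Hypothetical sketch of an auxiliary relative-localization loss for a Visual
# Transformer; names and hyper-parameters are illustrative, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelativeLocalizationHead(nn.Module):
    """Predicts the 2D offset between two token embeddings sampled from the
    spatial grid of final VT tokens (CLS token excluded)."""

    def __init__(self, embed_dim: int, hidden_dim: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * embed_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 2),  # predicted (dy, dx)
        )

    def forward(self, tokens: torch.Tensor, num_pairs: int = 64) -> torch.Tensor:
        # tokens: (B, H, W, C) grid of final token embeddings.
        B, H, W, C = tokens.shape
        # Randomly sample pairs of grid positions for every image in the batch.
        y1 = torch.randint(0, H, (B, num_pairs), device=tokens.device)
        x1 = torch.randint(0, W, (B, num_pairs), device=tokens.device)
        y2 = torch.randint(0, H, (B, num_pairs), device=tokens.device)
        x2 = torch.randint(0, W, (B, num_pairs), device=tokens.device)
        batch_idx = torch.arange(B, device=tokens.device).unsqueeze(1)
        t1 = tokens[batch_idx, y1, x1]  # (B, num_pairs, C)
        t2 = tokens[batch_idx, y2, x2]  # (B, num_pairs, C)
        pred = self.mlp(torch.cat([t1, t2], dim=-1))
        # Ground-truth offsets, normalised to roughly [-1, 1].
        target = torch.stack([(y2 - y1) / H, (x2 - x1) / W], dim=-1).float()
        return F.l1_loss(pred, target)
```

In such a setup the returned value would simply be added, with a small weight, to the standard supervised cross-entropy loss, e.g. `total_loss = ce_loss + lambda_loc * loc_head(token_grid)`, where `lambda_loc` is an assumed weighting hyper-parameter.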
Year: 2021
Published in: Advances in Neural Information Processing Systems 34
Place: San Diego
Conference: NeurIPS
ISBN: 9781713845393
Authors: Liu, Yahui; Sangineto, Enver; Bi, Wei; Sebe, Nicu; Lepri, Bruno; De Nadai, Marco
Citation: Efficient Training of Visual Transformers with Small-Size Datasets / Liu, Yahui; Sangineto, Enver; Bi, Wei; Sebe, Nicu; Lepri, Bruno; De Nadai, Marco. - (2021). (Paper presented at the NeurIPS 2021 conference, held online, 6th-14th December 2021).
Files in this record:

File: NeurIPS-2021-efficient-training-of-visual-transformers-with-small-datasets-Paper.pdf
Access: Open access
Type: Publisher's version (Publisher's layout)
Licence: All rights reserved
Size: 4.79 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/328669