
On the Effectiveness of LayerNorm Tuning for Continual Learning in Vision Transformers / De Min, Thomas; Mancini, Massimiliano; Alahari, Karteek; Alameda-Pineda, Xavier; Ricci, Elisa. - (2023), pp. 3577-3586. (Paper presented at the ICCV Workshops held in Paris, France, 2nd-6th October 2023) [10.1109/ICCVW60793.2023.00385].

On the Effectiveness of LayerNorm Tuning for Continual Learning in Vision Transformers

De Min, Thomas; Mancini, Massimiliano; Alahari, Karteek; Alameda-Pineda, Xavier; Ricci, Elisa
2023

Abstract

State-of-the-art rehearsal-free continual learning methods exploit the peculiarities of Vision Transformers to learn task-specific prompts, drastically reducing catastrophic forgetting. However, there is a tradeoff between the number of learned parameters and performance, making such models computationally expensive. In this work, we aim to reduce this cost while maintaining competitive performance. We achieve this by revisiting and extending a simple transfer learning idea: learning task-specific normalization layers. Specifically, we tune the scale and bias parameters of LayerNorm for each continual learning task, selecting them at inference time based on the similarity between task-specific keys and the output of the pre-trained model. To make the classifier robust to incorrect selection of parameters during inference, we introduce a two-stage training procedure, where we first optimize the task-specific parameters and then train the classifier with the same selection procedure used at inference time. Experiments on ImageNet-R and CIFAR-100 show that our method achieves results that are either superior or on par with the state of the art while being computationally cheaper.
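The core mechanism described above can be illustrated with a minimal sketch. This is not the authors' implementation; all names, shapes, and the NumPy setting are illustrative assumptions. It shows a LayerNorm with task-specific affine parameters (scale, bias) and a selection step that picks, via cosine similarity, the task whose learned key best matches the frozen pre-trained model's output for the current input.

```python
import numpy as np

def layernorm(x, scale, bias, eps=1e-6):
    # Standard LayerNorm over the feature dimension, with
    # task-specific affine parameters (scale, bias).
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return scale * (x - mu) / np.sqrt(var + eps) + bias

def select_task(query, keys):
    # Pick the task whose learned key is most similar (cosine
    # similarity) to the frozen pre-trained model's output.
    q = query / np.linalg.norm(query)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    return int(np.argmax(k @ q))

# Illustrative setup: 3 tasks, 8-dim features (hypothetical values).
rng = np.random.default_rng(0)
d, num_tasks = 8, 3
scales = rng.normal(1.0, 0.1, (num_tasks, d))  # per-task LayerNorm scales
biases = rng.normal(0.0, 0.1, (num_tasks, d))  # per-task LayerNorm biases
keys = rng.normal(size=(num_tasks, d))         # learned task keys

x = rng.normal(size=d)       # token features inside the ViT block
query = rng.normal(size=d)   # stand-in for the pre-trained model's output
t = select_task(query, keys)
y = layernorm(x, scales[t], biases[t])
```

Since only the per-task (scale, bias) pairs and keys are learned while the backbone stays frozen, the number of trainable parameters grows with the feature dimension rather than with prompt length, which is where the computational saving comes from.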
Year: 2023
Published in: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
Publisher: IEEE Computer Society, Piscataway, NJ, USA
ISBN: 979-8-3503-0744-3; 979-8-3503-0745-0
Files in this record:

2308.09610.pdf
  Type: Non-refereed preprint
  Access: Open access
  License: All rights reserved
  Size: 545.71 kB, Adobe PDF

On_the_Effectiveness_of_LayerNorm_Tuning_for_Continual_Learning_in_Vision_Transformers.pdf
  Type: Publisher's layout
  Access: Restricted (archive managers only)
  License: All rights reserved
  Size: 1.09 MB, Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/400789