Over the last few years, Unsupervised Domain Adaptation (UDA) techniques have gained remarkable importance and popularity in computer vision. However, compared to the extensive literature available for images, the video domain is still relatively unexplored. At the same time, the performance of an action recognition model is heavily affected by domain shift. In this paper, we propose a simple and novel UDA approach for video action recognition. Our approach leverages recent advances in spatio-temporal transformers to build a robust source model that better generalises to the target domain. Furthermore, our architecture learns domain-invariant features thanks to the introduction of a novel alignment loss term derived from the Information Bottleneck principle. We report results on two video action recognition benchmarks for UDA, showing state-of-the-art performance on HMDB ↔ UCF, as well as on the more challenging Kinetics→NEC-Drone. This demonstrates the effectiveness of our method in handling different levels of domain shift. The source code is available at https://github.com/vturrisi/UDAVT.

Unsupervised Domain Adaptation for Video Transformers in Action Recognition / da Costa, Victor G. Turrisi; Zara, Giacomo; Rota, Paolo; Oliveira-Santos, Thiago; Sebe, Nicu; Murino, Vittorio; Ricci, Elisa. - (2022), pp. 1258-1265. (Paper presented at the 26th International Conference on Pattern Recognition, ICPR 2022, held at the Palais des Congrès de Montréal, Canada, 2022) [10.1109/ICPR56361.2022.9956679].

Unsupervised Domain Adaptation for Video Transformers in Action Recognition

da Costa, Victor G. Turrisi; Zara, Giacomo; Rota, Paolo; Oliveira-Santos, Thiago; Sebe, Nicu; Murino, Vittorio; Ricci, Elisa
2022-01-01

2022
International Conference on Pattern Recognition
345 E 47TH ST, NEW YORK, NY 10017 USA
IEEE
978-1-6654-9062-7
Files in this record:
File: Unsupervised_Domain_Adaptation_for_Video_Transformers_in_Action_Recognition.pdf
Access: Repository administrators only
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 2.52 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11572/361305
Citations
  • PMC: n/a
  • Scopus: 6
  • Web of Science: 3
  • OpenAlex: n/a