
MultiDIAL: Domain Alignment Layers for (Multisource) Unsupervised Domain Adaptation / Carlucci, Fabio Maria; Porzi, Lorenzo; Caputo, Barbara; Ricci, Elisa; Rota Bulo, Samuel. - In: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE. - ISSN 0162-8828. - 43:12(2021), pp. 4441-4452. [10.1109/TPAMI.2020.3001338]

MultiDIAL: Domain Alignment Layers for (Multisource) Unsupervised Domain Adaptation


Abstract

One of the main challenges in developing visual recognition systems that work in the wild is to devise computational models immune to the domain shift problem, i.e., models that remain accurate when test data are drawn from a (slightly) different distribution than the training samples. In the last decade, considerable research effort has been devoted to devising algorithmic solutions to this issue. Recent attempts to mitigate domain shift have resulted in deep learning models for domain adaptation which learn domain-invariant representations by introducing appropriate loss terms, by casting the problem within an adversarial learning framework, or by embedding specific domain normalization layers into deep networks. This paper describes a novel approach for unsupervised domain adaptation. Similarly to previous works, we propose to align the learned representations by embedding them into appropriate network feature normalization layers. In contrast to previous works, our Domain Alignment Layers are designed not only to match the source and target feature distributions but also to automatically learn the degree of feature alignment required at different levels of the deep network. Unlike most previous deep domain adaptation methods, our approach is able to operate in a multi-source setting. Thorough experiments on four publicly available benchmarks confirm the effectiveness of our approach.
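The core idea described in the abstract, normalizing features with a blend of per-domain and cross-domain batch statistics whose mixing degree is learnable, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the function name `dial_forward` and the fixed scalar `alpha` (which the actual model learns per layer) are hypothetical choices for exposition.

```python
import numpy as np

def dial_forward(x, domain_ids, alpha, eps=1e-5):
    """Sketch of a domain-alignment-style normalization layer.

    x          : (N, C) feature batch mixing samples from several domains.
    domain_ids : (N,) integer domain label for each sample.
    alpha      : scalar in [0, 1], the degree of domain-specific alignment
                 (learned per layer in the real model; fixed here).

    Each sample is standardized with a convex combination of its own
    domain's batch statistics and the statistics of the whole batch:
    alpha = 1 gives fully domain-specific normalization, alpha = 0
    normalizes all domains with shared statistics.
    """
    global_mu = x.mean(axis=0)
    global_var = x.var(axis=0)
    out = np.empty_like(x, dtype=float)
    for d in np.unique(domain_ids):
        mask = domain_ids == d
        mu_d = x[mask].mean(axis=0)
        var_d = x[mask].var(axis=0)
        mu = alpha * mu_d + (1.0 - alpha) * global_mu
        var = alpha * var_d + (1.0 - alpha) * global_var
        out[mask] = (x[mask] - mu) / np.sqrt(var + eps)
    return out
```

With `alpha = 1.0`, features from each domain come out with (approximately) zero mean and unit variance regardless of the domain's original statistics, which is the distribution-matching effect the abstract refers to; intermediate values of `alpha` trade this off against shared normalization.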
Files in this record:

09113488.pdf
  Access: Archive administrators only
  Type: Publisher's version (Publisher's layout)
  License: All rights reserved
  Size: 7.21 MB
  Format: Adobe PDF

MultiDIAL_Domain_Alignment_Layers_for_Multisource_Unsupervised_Domain_Adaptation.pdf
  Description: final version
  Access: Archive administrators only
  Type: Publisher's version (Publisher's layout)
  License: All rights reserved
  Size: 1.71 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11572/285521
Citations
  • PMC: 1
  • Scopus: 13
  • Web of Science (ISI): 14
  • OpenAlex: ND