Inferring Latent Domains for Unsupervised Deep Domain Adaptation / Mancini, Massimiliano; Porzi, Lorenzo; Bulo, Samuel Rota; Caputo, Barbara; Ricci, Elisa. - In: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE. - ISSN 0162-8828. - 43:2(2021), pp. 485-498. [10.1109/TPAMI.2019.2933829]

Inferring Latent Domains for Unsupervised Deep Domain Adaptation

Mancini, Massimiliano; Ricci, Elisa
2021-01-01

Abstract

Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available by leveraging information from annotated data in a source domain. Most deep UDA approaches operate in a single-source, single-target scenario, i.e. they assume that the source and the target samples arise from a single distribution. However, in practice most datasets can be regarded as mixtures of multiple domains. In these cases, exploiting traditional single-source, single-target methods for learning classification models may lead to poor results. Furthermore, it is often difficult to provide the domain labels for all data points, i.e. latent domains should be automatically discovered. This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets and exploiting this information to learn robust target classifiers. Specifically, our architecture is based on two main components, i.e. a side branch that automatically computes the assignment of each sample to its latent domain and novel layers that exploit domain membership information to appropriately align the distribution of the CNN internal feature representations to a reference distribution. We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
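The two components described in the abstract — a side branch producing soft assignments of samples to latent domains, and layers that use those memberships to align internal feature distributions to a common reference — can be illustrated with a small numerical sketch. This is a hypothetical illustration, not the authors' code: the function names are invented, and the alignment step is sketched as a weighted, per-domain feature standardization, assuming the side branch outputs per-domain logits turned into memberships by a softmax.

```python
import numpy as np

def soft_domain_assignments(logits):
    # Softmax over the side branch's per-domain logits -> soft memberships.
    # logits: (N, D) array, one score per sample per latent domain.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def latent_domain_align(x, w, eps=1e-5):
    # Align features using per-latent-domain statistics (illustrative).
    # x: (N, F) feature batch; w: (N, D) soft domain assignments.
    # Each domain's mean/variance is a membership-weighted estimate over
    # the batch; each sample is standardized with a mixture of its
    # domains' statistics, mapping all latent domains toward a common
    # zero-mean, unit-variance reference distribution.
    out = np.zeros_like(x, dtype=float)
    for d in range(w.shape[1]):
        wd = w[:, d:d + 1]                        # (N, 1) weights for domain d
        n = wd.sum() + eps
        mu = (wd * x).sum(axis=0) / n             # weighted domain mean
        var = (wd * (x - mu) ** 2).sum(axis=0) / n  # weighted domain variance
        out += wd * (x - mu) / np.sqrt(var + eps)
    return out
```

In spirit this resembles domain-specific batch normalization with soft rather than hard domain labels; the actual layers proposed in the paper operate inside a CNN and are trained end-to-end with the domain-prediction branch.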
Files in this record:

File: 08792192.pdf
Access: Restricted (archive managers only)
Type: Publisher's version (Publisher's layout)
License: All rights reserved
Size: 4.48 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11572/285525
Citations
  • PubMed Central: 0
  • Scopus: 18
  • Web of Science: 16
  • OpenAlex: ND