Discovering Latent Domains for Unsupervised Domain Adaptation Through Consistency / Mancini, Massimiliano; Porzi, Lorenzo; Cermelli, Fabio; Caputo, Barbara. - 11752:(2019), pp. 390-401. (Paper presented at the 20th International Conference on Image Analysis and Processing, ICIAP 2019, held in Trento, Italy, 9-13 September 2019) [10.1007/978-3-030-30645-8_36].

Discovering Latent Domains for Unsupervised Domain Adaptation Through Consistency

Massimiliano Mancini
2019-01-01

Abstract

In recent years, great advances in Domain Adaptation (DA) have been made possible by deep neural networks. While this is true even for multi-source scenarios, most methods are based on the assumption that the domain to which each sample belongs is known a priori. In practice, however, the source domain may be composed of a mixture of multiple sub-domains, without any prior knowledge of the sub-domain to which each source sample belongs. In this case, multi-source DA methods are not applicable, while resorting to single-source ones may lead to sub-optimal results. In this work, we explore a recent direction in deep domain adaptation: automatically discovering latent domains in visual datasets. Previous works address this problem with a domain prediction branch trained with an entropy loss. Here we present a novel formulation for training the domain prediction branch which exploits (i) the domain prediction outputs for various perturbations of the input features and (ii) the min-entropy consensus loss, which forces the predictions for the perturbed features to be both consistent and of low entropy. We compare our approach to the previous state of the art on publicly available datasets, showing the effectiveness of our method both quantitatively and qualitatively.
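To make the abstract's key idea concrete, below is a minimal NumPy sketch of a min-entropy consensus loss applied to a pair of domain predictions. It assumes the pairwise form ℓ(p1, p2) = -½ max_y [log p1(y) + log p2(y)], which is low only when both predictions are peaked on the same domain (i.e., consistent and low-entropy). The function name `mec_loss` and the specific probability values are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def mec_loss(p1, p2, eps=1e-12):
    """Min-entropy consensus loss for two domain-probability vectors.

    Computes -1/2 * max_y [log p1(y) + log p2(y)]. The maximum over
    domains y picks the single domain on which the two predictions
    agree best; the loss is small only when both distributions put
    high probability on that same domain.
    """
    return -0.5 * np.max(np.log(p1 + eps) + np.log(p2 + eps))

# Two perturbations agreeing confidently on domain 0 -> small loss
agree = mec_loss(np.array([0.90, 0.05, 0.05]),
                 np.array([0.85, 0.10, 0.05]))

# Confident but conflicting predictions -> large loss
disagree = mec_loss(np.array([0.90, 0.05, 0.05]),
                    np.array([0.05, 0.90, 0.05]))

# Consistent but high-entropy (near-uniform) predictions -> large loss
uniform = mec_loss(np.ones(3) / 3, np.ones(3) / 3)
```

Here `agree` is the smallest of the three values, while both `disagree` and `uniform` are penalized, illustrating how the loss simultaneously rewards consistency across perturbations and low entropy.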
2019
Image Analysis and Processing – ICIAP 2019
Cham, Switzerland
Springer International Publishing
978-3-030-30645-8
Mancini, Massimiliano; Porzi, Lorenzo; Cermelli, Fabio; Caputo, Barbara
Files in this item:
No files are associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/385029
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: 2
  • Web of Science: 2