Retrieval Guided Unsupervised Multi-domain Image to Image Translation

Gomez, Raul; Liu, Yahui; De Nadai, Marco; Karatzas, Dimosthenis; Lepri, Bruno; Sebe, Nicu
2020-01-01

Abstract

Image-to-image translation aims to learn a mapping that transforms an image from one visual domain to another. Recent works assume that image descriptors can be disentangled into a domain-invariant content representation and a domain-specific style representation. Translation models therefore seek to preserve the content of source images while changing their style to that of a target visual domain. However, synthesizing new images is especially challenging in multi-domain translation, as the network has to compose content and style to generate reliable and diverse images across multiple domains. In this paper we propose the use of an image retrieval system to assist the image-to-image translation task. First, we train an image-to-image translation model to map images to multiple domains. Then, we train an image retrieval model, using real and generated images, to find images that are similar in content to a query image but belong to a different domain. Finally, we exploit the image retrieval system to fine-tune the image-to-image translation model and generate higher-quality images. Our experiments show the effectiveness of the proposed solution and highlight the contribution of the retrieval network, which can benefit from additional unlabeled data and help image-to-image translation models when training data are scarce.
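The pipeline described in the abstract hinges on a retrieval model that matches images by content across domains. As an illustration only (the function names, embedding dimensions, and toy vectors below are ours, not taken from the paper), the retrieval step can be sketched as a nearest-neighbour search over content embeddings:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between row vectors of a and b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def retrieve(query_emb, gallery_embs, k=1):
    """Return indices of the k gallery images closest in content to the query.

    query_emb:    content embedding of a (possibly generated) query image.
    gallery_embs: content embeddings of real images in a different domain.
    """
    sims = cosine_sim(query_emb[None, :], gallery_embs)[0]
    return np.argsort(-sims)[:k]

# Toy example with 2-D embeddings: the query is closest to gallery image 1.
gallery = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
query = np.array([0.1, 0.9])
print(retrieve(query, gallery, k=2))  # -> [1 2]
```

In the paper's setting, such retrieved real images can then provide a training signal for fine-tuning the translation model; the embedding function itself would be a learned network rather than the raw vectors used here.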
Published in: Proceedings of the 28th ACM International Conference on Multimedia
Publisher: ACM, New York
ISBN: 9781450379885
Authors: Gomez, Raul; Liu, Yahui; De Nadai, Marco; Karatzas, Dimosthenis; Lepri, Bruno; Sebe, Nicu
Retrieval Guided Unsupervised Multi-domain Image to Image Translation / Gomez, Raul; Liu, Yahui; De Nadai, Marco; Karatzas, Dimosthenis; Lepri, Bruno; Sebe, Nicu. - (2020), pp. 3164-3172. (Paper presented at ACM MM '20, held online (Seattle, United States), 12th-16th October 2020) [10.1145/3394171.3413785].
Files in this record:
File: 3394171.3413785.pdf (Adobe PDF, 2.84 MB)
Type: Publisher's version (publisher's layout)
License: All rights reserved
Access: Archive administrators only

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11572/284584
Citations
  • Scopus: 2
  • Web of Science: 2