Smoothing the Disentangled Latent Style Space for Unsupervised Image-to-Image Translation

Liu, Yahui; Sangineto, Enver; Sebe, Nicu; Lepri, Bruno; Wang, Wei; Nadai, Marco De
2021-01-01

Abstract

Image-to-Image (I2I) multi-domain translation models are usually also evaluated using the quality of their semantic interpolation results. However, state-of-the-art models frequently show abrupt changes in image appearance during interpolation and usually perform poorly when interpolating across domains. In this paper, we propose a new training protocol based on three specific losses which help a translation network to learn a smooth and disentangled latent style space in which: 1) both intra- and inter-domain interpolations correspond to gradual changes in the generated images, and 2) the content of the source image is better preserved during translation. Moreover, we propose a novel evaluation metric to properly measure the smoothness of the latent style space of I2I translation models. The proposed method can be plugged into existing translation approaches, and our extensive experiments on different datasets show that it can significantly boost the quality of the generated images and the graduality of the interpolations.
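
For readers unfamiliar with the interpolation setting the abstract refers to, the sketch below illustrates the general idea: two style codes are linearly blended and each intermediate code is decoded into an image, so that a smooth style space yields a gradual visual transition between the two references. This is only a minimal illustration under assumed placeholder modules (ToyStyleEncoder, ToyGenerator) and dimensions; it is not the paper's actual architecture, losses, or evaluation metric.

# Minimal, illustrative sketch of latent style interpolation (NOT the paper's
# actual model). All module names and dimensions are hypothetical placeholders.
import torch
import torch.nn as nn

class ToyStyleEncoder(nn.Module):
    """Placeholder: maps an image to a style code."""
    def __init__(self, style_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, style_dim))
    def forward(self, x):
        return self.net(x)

class ToyGenerator(nn.Module):
    """Placeholder: renders an image from a content image and a style code."""
    def __init__(self, style_dim=64):
        super().__init__()
        self.proj = nn.Linear(style_dim, 3)
    def forward(self, content, style):
        # Toy behaviour: broadcast the projected style over the content image.
        return content + self.proj(style)[:, :, None, None]

E, G = ToyStyleEncoder(), ToyGenerator()
x_src = torch.randn(1, 3, 128, 128)                      # source (content) image
x_a, x_b = torch.randn(1, 3, 128, 128), torch.randn(1, 3, 128, 128)
s_a, s_b = E(x_a), E(x_b)                                # style codes of two references
                                                         # (possibly from different domains)

# Linear interpolation in style space: if the space is smooth, the generated
# images change gradually as t sweeps from 0 to 1.
frames = [G(x_src, torch.lerp(s_a, s_b, t)) for t in torch.linspace(0, 1, steps=8)]

The paper's contribution is the training protocol that makes such intra- and inter-domain interpolations smooth, together with a metric that quantifies that smoothness; the sketch only shows how an interpolation sequence is produced once style codes are available.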
2021
IEEE/CVF Conference on Computer Vision and Pattern Recognition
Piscataway, NJ USA
IEEE
978-1-6654-4509-2
Liu, Yahui; Sangineto, Enver; Chen, Yajing; Bao, Linchao; Zhang, Haoxian; Sebe, Nicu; Lepri, Bruno; Wang, Wei; Nadai, Marco De
Smoothing the Disentangled Latent Style Space for Unsupervised Image-to-Image Translation / Liu, Yahui; Sangineto, Enver; Chen, Yajing; Bao, Linchao; Zhang, Haoxian; Sebe, Nicu; Lepri, Bruno; Wang, Wei; Nadai, Marco De. - (2021), pp. 10780-10789. (Paper presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition, held online in 2021) [10.1109/CVPR46437.2021.01064].
Files in this item:

Liu_Smoothing_the_Disentangled_Latent_Style_Space_for_Unsupervised_Image-to-Image_Translation_CVPR_2021_paper.pdf
Access: Open access
Type: Refereed author's manuscript (post-print)
License: All rights reserved
Size: 7.1 MB
Format: Adobe PDF

Smoothing_the_Disentangled_Latent_Style_Space_for_Unsupervised_Image-to-Image_Translation.pdf
Access: Restricted to repository staff only
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 6.43 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/326194
Citations
  • PMC: not available
  • Scopus: 26
  • Web of Science (ISI): 19