We propose a unified Generative Adversarial Network (GAN) for controllable image-to-image translation, i.e., transferring an image from a source to a target domain guided by controllable structures. In addition to conditioning on a reference image, we show how the model can generate images conditioned on controllable structures, e.g., class labels, object keypoints, human skeletons, and scene semantic maps. The proposed model consists of a single generator and a discriminator taking a conditional image and the target controllable structure as input. In this way, the conditional image provides appearance information and the controllable structure provides structure information for generating the target result. Moreover, our model learns the image-to-image mapping through three novel losses, i.e., color loss, controllable structure guided cycle-consistency loss, and controllable structure guided self-content preserving loss. We also present the Fréchet ResNet Distance (FRD) to evaluate the quality of the generated images. Experiments on two challenging image translation tasks, i.e., hand gesture-to-gesture translation and cross-view image translation, show that our model generates convincing results and significantly outperforms other state-of-the-art methods on both tasks. Moreover, the proposed framework is a unified solution, and can thus be applied to other controllable structure guided image translation tasks such as landmark guided facial expression translation and keypoint guided person image generation. To the best of our knowledge, we are the first to make one GAN framework work on all such controllable structure guided image translation tasks. Code is available at https://github.com/Ha0Tang/GestureGAN.
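The abstract introduces the Fréchet ResNet Distance (FRD) without defining it here. Assuming FRD follows the standard Fréchet (FID-style) formulation — fitting Gaussians to feature embeddings of real and generated images, with ResNet features in place of Inception features — the core of such a metric could be sketched as below. The function name and the use of precomputed feature matrices are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """Fréchet distance between two sets of feature vectors (rows),
    each modeled as a multivariate Gaussian, as in FID-style metrics.
    In an FRD-like setup the rows would be ResNet embeddings of
    real and generated images (an assumption, not the paper's code)."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_fake, rowvar=False)
    diff = mu1 - mu2
    # Matrix square root of the covariance product; small numerical
    # imaginary parts are discarded.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    # ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

Identical feature sets yield a distance of zero, and shifting the mean of one set increases the distance, matching the intuition that lower scores indicate generated images closer to the real distribution.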
Unified Generative Adversarial Networks for Controllable Image-to-Image Translation / Tang, H.; Liu, H.; Sebe, N.. - In: IEEE TRANSACTIONS ON IMAGE PROCESSING. - ISSN 1057-7149. - 29(2020), pp. 8916-8929.
Record not yet validated
The displayed data have not yet undergone formal validation by the IRIS staff, but have nevertheless been transmitted to the Cineca Sito Docente (Loginmiur).
|Title:|Unified Generative Adversarial Networks for Controllable Image-to-Image Translation|
|Authors:|Tang, H.; Liu, H.; Sebe, N.|
|Journal:|IEEE TRANSACTIONS ON IMAGE PROCESSING|
|Year of publication:|2020|
|Scopus identifier:|2-s2.0-85091828141|
|PubMed identifier:|32915739|
|WOS identifier:|WOS:000571733900003|
|Digital Object Identifier (DOI):|http://dx.doi.org/10.1109/TIP.2020.3021789|