Describe What to Change: A Text-guided Unsupervised Image-to-image Translation Approach

Liu, Yahui; De Nadai, Marco; Sebe, Nicu; Lepri, Bruno
2020

Abstract

Manipulating the visual attributes of images through human-written text is a very challenging task. On the one hand, models have to learn the manipulation without ground truth for the desired output. On the other hand, models have to deal with the inherent ambiguity of natural language. Previous research usually requires either that the user describe all the characteristics of the desired image or the use of richly-annotated image-captioning datasets. In this work, we propose a novel unsupervised approach, based on image-to-image translation, that alters the attributes of a given image through a command-like sentence such as "change the hair color to black". In contrast to state-of-the-art approaches, our model requires neither a human-annotated dataset nor a textual description of all the attributes of the desired image, but only of those that have to be modified. Our proposed model disentangles the image content from the visual attributes, learns to modify the latter using the textual description, and then generates a new image from the content and the modified attribute representation. Because text can be inherently ambiguous (blond hair may refer to different shades of blond, e.g., golden, icy, or sandy), our method generates multiple stochastic versions of the same translation. Experiments show that the proposed model achieves promising performance on two large-scale public datasets: CelebA and CUB. We believe our approach will pave the way for new avenues of research combining textual and speech commands with visual attributes.
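To make the pipeline described in the abstract concrete, below is a minimal PyTorch-style sketch of the architecture it outlines: a content encoder, an attribute encoder, a text-conditioned attribute editor that also takes a noise vector (so one command yields multiple stochastic edits), and a generator that decodes the content under the edited attributes. All module names, layer sizes, the sentence embedding, and the channel-wise modulation scheme are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class TextGuidedTranslator(nn.Module):
    """Illustrative sketch only: disentangle content from attributes, edit
    the attributes with a sentence embedding plus noise, decode a new image."""

    def __init__(self, attr_dim=8, text_dim=128, noise_dim=8):
        super().__init__()
        self.noise_dim = noise_dim
        # Content encoder: spatial features meant to be attribute-invariant.
        self.content_enc = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
        )
        # Attribute encoder: a compact attribute vector per image.
        self.attr_enc = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, attr_dim),
        )
        # Attribute editor: mixes current attributes, the text command, and
        # noise, so the same sentence can yield several plausible edits.
        self.editor = nn.Sequential(
            nn.Linear(attr_dim + text_dim + noise_dim, 64), nn.ReLU(),
            nn.Linear(64, attr_dim),
        )
        # Generator: decodes content features modulated by the new attributes
        # (naive channel-wise modulation stands in for the real mechanism).
        self.to_gamma = nn.Linear(attr_dim, 128)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, img, text_emb):
        c = self.content_enc(img)                         # content code
        a = self.attr_enc(img)                            # attribute code
        z = torch.randn(img.size(0), self.noise_dim,
                        device=img.device)                # stochastic variant
        a_new = self.editor(torch.cat([a, text_emb, z], dim=1))
        gamma = self.to_gamma(a_new)[:, :, None, None]    # broadcast over H, W
        return self.dec(c * (1 + gamma))                  # edited image

model = TextGuidedTranslator()
img = torch.randn(2, 3, 64, 64)    # toy image batch
text_emb = torch.randn(2, 128)     # embedding of e.g. "change the hair color to black"
print(model(img, text_emb).shape)  # torch.Size([2, 3, 64, 64])
```

Running the forward pass twice with the same image and sentence produces different outputs because of the fresh noise vector, which is how the "multiple stochastic versions" mentioned in the abstract would arise.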
Year: 2020
Proceedings: Proceedings of the 28th ACM International Conference on Multimedia
Publisher address: 1601 Broadway, 10th Floor, New York, NY, United States
Publisher: Association for Computing Machinery, Inc.
ISBN: 9781450379885
Authors: Liu, Yahui; De Nadai, Marco; Cai, Deng; Li, Huayang; Alameda-Pineda, Xavier; Sebe, Nicu; Lepri, Bruno
Citation: Describe What to Change: A Text-guided Unsupervised Image-to-image Translation Approach / Liu, Yahui; De Nadai, Marco; Cai, Deng; Li, Huayang; Alameda-Pineda, Xavier; Sebe, Nicu; Lepri, Bruno. - (2020), pp. 1357-1365. (Paper presented at the 28th ACM International Conference on Multimedia, MM 2020, held online (Seattle, United States), 12th-16th October 2020) [10.1145/3394171.3413505].
Files in this product:

3394171.3413505.pdf
Access: Restricted to repository managers only
Type: Publisher's version (Publisher's layout)
License: All rights reserved
Size: 4.43 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/284442
Citations
  • PMC: ND
  • Scopus: 41
  • Web of Science (ISI): 36
  • OpenAlex: ND