We introduce language-driven image generation, the task of generating an image visualizing the semantic contents of a word embedding, e.g., given the word embedding of grasshopper, we generate a natural image of a grasshopper. We implement a simple method based on two mapping functions. The first takes as input a word embedding (as produced, e.g., by the word2vec toolkit) and maps it onto a high-level visual space (e.g., the space defined by one of the top layers of a Convolutional Neural Network). The second function maps this abstract visual representation to pixel space, in order to generate the target image. Several user studies suggest that the current system produces images that capture general visual properties of the concepts encoded in the word embedding, such as color or typical environment, and are sufficient to discriminate between general categories of objects.
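To make the two-stage pipeline concrete, here is a minimal sketch of the architecture the abstract describes: one function mapping a word embedding into a high-level visual space, and a second mapping that visual representation to pixels. The dimensions (300-d word2vec vectors, a 4096-d CNN feature layer such as an AlexNet-style fc7, a 64x64 RGB output) and the single-hidden-layer modules are assumptions for illustration; the paper's actual learned mappings may differ.

```python
import torch
import torch.nn as nn

# Assumed dimensions: 300-d word2vec input, 4096-d "high-level visual
# space" (e.g., a top CNN layer), 64x64 RGB output image.
EMB_DIM, VIS_DIM, IMG_SIDE = 300, 4096, 64

# First mapping: word embedding -> abstract visual representation.
language_to_vision = nn.Sequential(
    nn.Linear(EMB_DIM, VIS_DIM),
    nn.ReLU(),
)

# Second mapping: visual representation -> pixel space.
vision_to_pixels = nn.Sequential(
    nn.Linear(VIS_DIM, IMG_SIDE * IMG_SIDE * 3),
    nn.Tanh(),  # pixel values scaled to [-1, 1]
)

def generate(word_vec: torch.Tensor) -> torch.Tensor:
    """Map a word embedding to a (3, 64, 64) image tensor."""
    visual = language_to_vision(word_vec)
    pixels = vision_to_pixels(visual)
    return pixels.view(3, IMG_SIDE, IMG_SIDE)

# Example: a random stand-in for the word2vec vector of "grasshopper".
image = generate(torch.randn(EMB_DIM))
print(image.shape)  # torch.Size([3, 64, 64])
```

In the sketch both mappings are trained end to end from word-embedding/image pairs; at generation time a new word vector is simply pushed through the two functions to produce an image.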
Title: Unveiling the Dreams of Word Embeddings: Towards Language-Driven Image Generation
Authors: Angeliki Lazaridou; Dat Tien Nguyen; Raffaella Bernardi; Marco Baroni
Year of publication: 2015
Title of the volume containing the contribution: NIPS Multimodal Machine Learning Workshop
Handle: http://hdl.handle.net/11572/136673
Appears in type: 04.3 Poster presented at Conference or Workshop