While several research studies have focused on analyzing human behavior and, in particular, emotional signals from visual data, the problem of synthesizing face video sequences with specific attributes (e.g., age, facial expressions) has received much less attention. This paper proposes a novel deep generative model that produces face videos from a given image of a neutral face and a label indicating a specific facial expression, e.g., a spontaneous smile. Our framework consists of two main building blocks: an image generator and a frame sequence generator. The image generator is implemented as a deep neural model that combines generative adversarial networks and variational auto-encoders, while the sequence generator is a label-conditioned recurrent neural network. In the proposed framework, given a neutral face and a label as input, the sequence generator outputs a set of hidden representations with smooth transitions corresponding to video frames. The image generator then decodes these hidden representations into the actual face images. To ensure that the network generates videos consistent with the given label, a novel identity adversarial loss is proposed. Our experimental results demonstrate the effectiveness of the framework and the advantage of introducing an adversarial component into recurrent models for face video generation.
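The two-stage pipeline the abstract describes (a label-conditioned recurrent sequence generator producing smoothly varying hidden states, followed by an image generator that decodes each state into a frame) can be sketched as below. This is a minimal NumPy toy, not the paper's implementation: all dimensions, the linear decoder standing in for the GAN/VAE image generator, and the tanh recurrence are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper):
H, Z, L, T = 16, 8, 4, 5   # hidden size, latent size, #labels, #frames

# Label-conditioned recurrent sequence generator: each hidden state is
# updated from the previous state, the neutral-face latent code z, and
# the one-hot expression label y, giving smooth frame-to-frame transitions.
W_h = rng.normal(scale=0.1, size=(H, H))
W_z = rng.normal(scale=0.1, size=(H, Z))
W_y = rng.normal(scale=0.1, size=(H, L))

# Stand-in image generator: a linear decoder from hidden state to a
# flattened 8x8 "frame" (the real model uses a GAN/VAE decoder).
W_dec = rng.normal(scale=0.1, size=(64, H))

def generate_video(z, y, steps=T):
    """Roll the conditioned recurrence forward, decoding each state."""
    h = np.zeros(H)
    frames = []
    for _ in range(steps):
        h = np.tanh(W_h @ h + W_z @ z + W_y @ y)  # next hidden state
        frames.append(W_dec @ h)                  # decode state -> frame
    return np.stack(frames)

z = rng.normal(size=Z)   # latent code of the neutral input face
y = np.eye(L)[1]         # one-hot expression label, e.g. "spontaneous smile"
video = generate_video(z, y)
print(video.shape)       # (5, 64): T decoded frames
```

The point of the structure is that conditioning every recurrence step on the same label steers the whole trajectory of hidden states toward one expression, while decoding each state independently keeps the image generator reusable across frames.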
Learning How to Smile: Expression Video Generation with Conditional Adversarial Recurrent Nets / Wang, W.; Alameda-Pineda, X.; Xu, D.; Ricci, E.; Sebe, N.. - In: IEEE TRANSACTIONS ON MULTIMEDIA. - ISSN 1520-9210. - 2020(2020).
Record not yet validated
The data shown have not yet undergone formal validation by the IRIS Staff, but have nevertheless been transmitted to the Cineca Sito Docente (Loginmiur).
Title: Learning How to Smile: Expression Video Generation with Conditional Adversarial Recurrent Nets
Authors: Wang, W.; Alameda-Pineda, X.; Xu, D.; Ricci, E.; Sebe, N.
Journal: IEEE TRANSACTIONS ON MULTIMEDIA
Year of publication: 2020
Scopus ID: 2-s2.0-85077386512
WOS ID: WOS:000584239900005
Digital Object Identifier (DOI): http://dx.doi.org/10.1109/TMM.2019.2963621
Handle: http://hdl.handle.net/11572/251274
Citation: Learning How to Smile: Expression Video Generation with Conditional Adversarial Recurrent Nets / Wang, W.; Alameda-Pineda, X.; Xu, D.; Ricci, E.; Sebe, N.. - In: IEEE TRANSACTIONS ON MULTIMEDIA. - ISSN 1520-9210. - 2020(2020).