Dual In-painting Model for Unsupervised Gaze Correction and Animation in the Wild

Zhang, Jichao; Chen, Jingjing; Tang, Hao; Wang, Wei; Yan, Yan; Sangineto, Enver; Sebe, Nicu
2020

Abstract

We address the problem of unsupervised gaze correction in the wild, presenting a solution that works without the need for precise annotations of the gaze angle and the head pose. We created a new dataset called CelebAGaze consisting of two domains, X and Y, where the eyes are either staring at the camera or looking elsewhere. Our method consists of three novel modules: the Gaze Correction module (GCM), the Gaze Animation module (GAM), and the Pretrained Autoencoder module (PAM). Specifically, GCM and GAM separately train a dual in-painting network, using data from domain X for gaze correction and data from domain Y for gaze animation. Additionally, a Synthesis-As-Training method is proposed when training GAM to encourage the features encoded from the eye region to be correlated with the angle information, so that gaze animation can be achieved by interpolation in the latent space. To further preserve identity information (e.g., eye shape, iris color), we propose the PAM with an Autoencoder, which is based on self-supervised mirror learning, where the bottleneck features are angle-invariant and serve as an extra input to the dual in-painting models. Extensive experiments validate the effectiveness of the proposed method for gaze correction and gaze animation in the wild and demonstrate the superiority of our approach over state-of-the-art baselines in producing more compelling results. Our code, pretrained models, and supplementary results are available at: https://github.com/zhangqianhui/GazeAnimation.
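To make two of the mechanisms in the abstract concrete, the sketch below illustrates (i) a self-supervised mirror-learning autoencoder whose bottleneck code is pulled toward invariance under horizontal mirroring of the eye patch, and (ii) gaze animation by linear interpolation between two latent codes. This is a minimal PyTorch sketch of the general idea only; all names (MirrorAutoencoder, mirror_loss, interpolate_gaze) and the network sizes are hypothetical assumptions, not the authors' implementation (their repository uses a different codebase).

# Minimal sketch: mirror-learning autoencoder with an angle-invariant
# bottleneck, plus gaze animation by latent interpolation.
# All names and shapes are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MirrorAutoencoder(nn.Module):
    """Autoencoder over eye patches; the bottleneck is trained to be
    invariant to horizontal mirroring (a self-supervised proxy)."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(              # 3x32x32 eye patch -> latent
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(              # latent -> reconstructed patch
            nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def mirror_loss(model, eyes):
    """Reconstruction loss plus a term pulling together the codes of an
    eye patch and its horizontal mirror (self-supervised invariance)."""
    recon, z = model(eyes)
    _, z_mirror = model(torch.flip(eyes, dims=[3]))  # flip along the width axis
    return F.l1_loss(recon, eyes) + F.mse_loss(z, z_mirror)

def interpolate_gaze(decoder, z_src, z_dst, steps: int = 8):
    """Gaze animation: decode points on the segment between two latent codes."""
    alphas = torch.linspace(0.0, 1.0, steps)
    return [decoder((1 - a) * z_src + a * z_dst) for a in alphas]

if __name__ == "__main__":
    model = MirrorAutoencoder()
    eyes = torch.randn(4, 3, 32, 32)               # dummy batch of eye patches
    loss = mirror_loss(model, eyes)
    loss.backward()
    with torch.no_grad():
        _, z = model(eyes)
        frames = interpolate_gaze(model.decoder, z[0:1], z[1:2])
    print(loss.item(), len(frames))

In the paper's setting the interpolation endpoints would come from eye regions with different gaze angles, and the decoded patch would feed the in-painting network rather than being used directly; the sketch only shows why an angle-correlated latent space makes animation by interpolation possible.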
Year: 2020
Published in: Proceedings of the 28th ACM International Conference on Multimedia
Publisher: Association for Computing Machinery, Inc., 1601 Broadway, 10th Floor, New York, NY, United States
ISBN: 9781450379885
Dual In-painting Model for Unsupervised Gaze Correction and Animation in the Wild / Zhang, Jichao; Chen, Jingjing; Tang, Hao; Wang, Wei; Yan, Yan; Sangineto, Enver; Sebe, Nicu. - (2020), pp. 1588-1596. (Paper presented at the 28th ACM International Conference on Multimedia, MM 2020, held online (Seattle, United States), 12-16 October 2020) [10.1145/3394171.3413981].
Files in this record:
File: 3394171.3413981.pdf
Access: archive administrators only
Type: Publisher's version (Publisher's layout)
License: All rights reserved
Size: 6.44 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/284586
Citations
  • PMC: not available
  • Scopus: 11
  • Web of Science (ISI): 2
  • OpenAlex: not available