3D-Aware Semantic-Guided Generative Model for Human Synthesis

Zhang, Jichao; Sangineto, Enver; Tang, Hao; Siarohin, Aliaksandr; Zhong, Zhun; Sebe, Nicu; Wang, Wei
2022-01-01

Abstract

Generative Neural Radiance Field (GNeRF) models, which extract implicit 3D representations from 2D images, have recently been shown to produce realistic images of rigid and semi-rigid objects, such as human faces or cars. However, they usually struggle to generate high-quality images of non-rigid objects, such as the human body, which is of great interest for many computer graphics applications. This paper proposes a 3D-aware Semantic-Guided Generative Model (3D-SGAN) for human image synthesis, which combines a GNeRF with a texture generator. The former learns an implicit 3D representation of the human body and outputs a set of 2D semantic segmentation masks. The latter transforms these semantic masks into a real image, adding a realistic texture to the human appearance. Without requiring additional 3D information, our model can learn 3D human representations that support photo-realistic, controllable generation. Our experiments on the DeepFashion dataset show that 3D-SGAN significantly outperforms the most recent baselines. The code is available at https://github.com/zhangqianhui/3DSGAN.
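The abstract describes a two-stage pipeline: a generative NeRF renders per-pixel semantic masks from a latent code and a camera pose, and a texture generator then translates those masks into an RGB image. The minimal PyTorch sketch below illustrates only that data flow; the class names, tensor shapes, and camera-pose parameterization are illustrative assumptions, not the authors' implementation (see the GitHub repository above for the actual code).

# Hypothetical sketch of the two-stage 3D-SGAN pipeline described in the
# abstract. Module and parameter names are illustrative assumptions, not
# the authors' API (see https://github.com/zhangqianhui/3DSGAN for that).
import torch
import torch.nn as nn

class SemanticGNeRF(nn.Module):
    """Stage 1 (assumed interface): maps a latent code and a camera pose
    to a set of 2D semantic segmentation masks. A real GNeRF would
    volume-render per-class semantics along camera rays; a plain MLP
    stands in here to keep the sketch self-contained."""
    def __init__(self, latent_dim=256, num_classes=8, res=64):
        super().__init__()
        self.num_classes, self.res = num_classes, res
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, num_classes * res * res),
        )

    def forward(self, z, cam_pose):
        x = torch.cat([z, cam_pose], dim=-1)
        logits = self.mlp(x).view(-1, self.num_classes, self.res, self.res)
        return logits.softmax(dim=1)  # per-pixel semantic masks

class TextureGenerator(nn.Module):
    """Stage 2 (assumed interface): translates semantic masks into an RGB
    image, adding realistic texture (an image-to-image generator)."""
    def __init__(self, num_classes=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, masks):
        return self.net(masks)

# Usage: sample a latent code and a camera pose, render masks, then texture.
z = torch.randn(1, 256)
cam_pose = torch.randn(1, 3)          # e.g. yaw/pitch/roll of the camera
masks = SemanticGNeRF()(z, cam_pose)  # (1, 8, 64, 64) semantic layout
image = TextureGenerator()(masks)     # (1, 3, 64, 64) RGB human image

Because the semantic masks sit between the two stages, the camera pose and the texture can be varied independently, which is what makes the generation controllable.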
2022
European Conference on Computer Vision (ECCV)
Gewerbestrasse 11, Cham, CH-6330, Switzerland
Springer
ISBN: 978-3-031-19783-3 (print); 978-3-031-19784-0 (online)
3D-Aware Semantic-Guided Generative Model for Human Synthesis / Zhang, Jichao; Sangineto, Enver; Tang, Hao; Siarohin, Aliaksandr; Zhong, Zhun; Sebe, Nicu; Wang, Wei. - 13675:(2022), pp. 339-356. (Paper presented at the 17th European Conference on Computer Vision, ECCV 2022, held in Tel Aviv, Israel, 23–27 October 2022) [10.1007/978-3-031-19784-0_20].
Files in this record:
File: 136750337.pdf (Open Access from 01/11/2023)
Type: Refereed author's manuscript (post-print)
License: All rights reserved
Size: 6.37 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/361307
Citations
  • PMC: not available
  • Scopus: 7
  • Web of Science: 7
  • OpenAlex: not available