Geometry-Contrastive Transformer for Generalized 3D Pose Transfer

Tang, Hao; Sebe, Nicu
2022-01-01

Abstract

We present a customized 3D mesh Transformer model for the pose transfer task. Since 3D pose transfer is essentially a deformation procedure that depends on the given meshes, the intuition of this work is to perceive the geometric inconsistency between the given meshes with the powerful self-attention mechanism. Specifically, we propose a novel geometry-contrastive Transformer with an efficient, 3D-structured perception of the global geometric inconsistencies across the given meshes. Locally, a simple yet efficient central geodesic contrastive loss is further proposed to improve the learning of regional geometric inconsistencies. Finally, we present a latent isometric regularization module, together with a novel semi-synthesized dataset, for the cross-dataset 3D pose transfer task towards unknown spaces. Extensive experimental results demonstrate the efficacy of our approach, with state-of-the-art quantitative performance on the SMPL-NPT, FAUST, and our newly proposed SMG3D datasets, as well as promising qualitative results on the MGcloth and SMAL datasets. The results show that our method achieves robust 3D pose transfer and generalizes to challenging meshes from unknown spaces in cross-dataset tasks. The code and dataset are available at https://github.com/mikecheninoulu/CGT.
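The central geodesic contrastive loss mentioned in the abstract is defined in the full paper; as a rough, generic illustration of what a contrastive objective over per-region mesh features looks like, the minimal PyTorch sketch below implements a standard InfoNCE-style loss. It is not the authors' actual loss: the function name, feature shapes, and temperature value are illustrative assumptions only.

import torch
import torch.nn.functional as F

def info_nce_loss(anchor, positive, temperature=0.07):
    # Generic InfoNCE: each anchor is pulled toward its matching positive
    # (same row index) and pushed away from every other sample in the batch.
    anchor = F.normalize(anchor, dim=-1)          # (B, D) anchor features
    positive = F.normalize(positive, dim=-1)      # (B, D) matching features
    logits = anchor @ positive.t() / temperature  # (B, B) cosine-similarity logits
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)

# Toy usage with made-up shapes: 8 mesh regions, 64-dim features each,
# e.g. features of a generated mesh vs. the corresponding ground truth.
region_feats_out = torch.randn(8, 64)
region_feats_gt = torch.randn(8, 64)
print(info_nce_loss(region_feats_out, region_feats_gt).item())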
Year: 2022
Conference: AAAI Conference on Artificial Intelligence (AAAI'22)
Publisher address: 2275 E BAYSHORE RD, STE 160, PALO ALTO, CA 94303 USA
Publisher: Association for the Advancement of Artificial Intelligence
ISBN-10: 1-57735-876-7
ISBN-13: 978-1-57735-876-3
Authors: Chen, Haoyu; Tang, Hao; Yu, Zitong; Sebe, Nicu; Zhao, Guoying
Citation: Geometry-Contrastive Transformer for Generalized 3D Pose Transfer / Chen, Haoyu; Tang, Hao; Yu, Zitong; Sebe, Nicu; Zhao, Guoying. - 36:1 (2022), pp. 258-266. (Paper presented at the 36th AAAI Conference on Artificial Intelligence, AAAI 2022, held virtually in 2022) [DOI: 10.1609/aaai.v36i1.19901].
Files in this item:
File: 19901-Article Text-23914-1-2-20220628.pdf
Access: open access
Type: Editorial version (publisher's layout)
License: All rights reserved
Size: 4.2 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/361303
Citations
  • PMC: not available
  • Scopus: 10
  • Web of Science (ISI): 8
  • OpenAlex: not available