
PanopTOP: A framework for generating viewpoint-invariant human pose estimation datasets / Garau, N.; Martinelli, G.; Brodka, P.; Bisagno, N.; Conci, N. - (2021), pp. 234-242. (Paper presented at the 18th IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2021, held virtually, 11-17 October 2021) [10.1109/ICCVW54120.2021.00031].

PanopTOP: A framework for generating viewpoint-invariant human pose estimation datasets

Garau N.; Martinelli G.; Bisagno N.; Conci N.
2021-01-01

Abstract

Human pose estimation (HPE) from RGB and depth images has recently seen a push toward viewpoint-invariant and scale-invariant pose retrieval methods. Current methods fail to generalize to unconventional viewpoints because viewpoint-invariant data is lacking at training time: existing datasets do not provide multi-viewpoint observations and mostly focus on frontal views. In this work, we introduce PanopTOP, a fully automatic framework for generating semi-synthetic RGB and depth samples with 2D and 3D ground-truth pedestrian poses from multiple arbitrary viewpoints. Starting from the Panoptic Dataset [15], we use the PanopTOP framework to generate PanopTOP31K, a dataset of 31K images of 23 different subjects recorded from diverse and challenging viewpoints, including the top view. Finally, we provide baseline results and cross-validation tests on our dataset, demonstrating that it is possible to generalize from the semi-synthetic to the real-world domain. The dataset and the code will be made publicly available upon acceptance.
2021
Proceedings of the IEEE International Conference on Computer Vision
Piscataway NJ
Institute of Electrical and Electronics Engineers Inc.
978-1-6654-0191-3
Garau, N.; Martinelli, G.; Brodka, P.; Bisagno, N.; Conci, N.
Files in this item:

Garau_PanopTOP_A_Framework_for_Generating_Viewpoint-Invariant_Human_Pose_Estimation_Datasets_ICCVW_2021_paper.pdf
Access: open access
Description: ICCV Open Access version
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 1.94 MB
Format: Adobe PDF

DECA_Deep_viewpoint-Equivariant_human_pose_estimation_using_Capsule_Autoencoders.pdf
Access: archive administrators only
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 3.28 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/330704
Citations
  • PMC: n/a
  • Scopus: 4
  • Web of Science: 0