Human Pose Estimation (HPE) aims at retrieving the 3D position of human joints from images or videos. We show that current 3D HPE methods suffer from a lack of viewpoint equivariance: they tend to fail or perform poorly on viewpoints unseen at training time. Deep learning methods often rely on scale-, translation-, or rotation-invariant operations, such as max-pooling. However, adopting such operations does not necessarily improve viewpoint generalization; instead, it tends to make methods more data-dependent. To tackle this issue, we propose DECA, a novel capsule autoencoder network with fast Variational Bayes capsule routing. By modeling each joint as a capsule entity, combined with the routing algorithm, our approach preserves the joints' hierarchical and geometrical structure in the feature space, independently of the viewpoint. By achieving viewpoint equivariance, we drastically reduce the network's data dependency at training time, resulting in an improved ability to generalize to unseen viewpoints. In the experimental validation, we outperform other methods on depth images from both seen and unseen viewpoints, both top-view and front-view. In the RGB domain, the same network achieves state-of-the-art results on the challenging viewpoint transfer task, also establishing a new framework for top-view HPE. The code can be found at https://github.com/mmlab-cv/DECA.
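The abstract's point that invariant operations such as max-pooling do not by themselves help viewpoint generalization can be illustrated with a toy example (an illustration, not code from the paper): global max-pooling is invariant to translation, so it discards exactly the positional information that an equivariant representation would preserve.

```python
import numpy as np

def global_max_pool(x):
    """Global max-pooling: collapses all spatial dimensions, keeping only the max."""
    return x.max()

# A single-channel "feature map" with one activation (e.g. a detected joint).
fmap = np.zeros((6, 6))
fmap[1, 2] = 1.0

# The same activation translated to a different location.
shifted = np.zeros((6, 6))
shifted[4, 5] = 1.0

# Invariance: the pooled outputs are identical, so the joint's position
# (the geometric structure an equivariant model aims to keep) is lost.
print(global_max_pool(fmap) == global_max_pool(shifted))  # True

# An equivariant descriptor (here, simply the argmax location) instead
# changes consistently with the input transformation.
print(np.unravel_index(fmap.argmax(), fmap.shape))        # (1, 2)
print(np.unravel_index(shifted.argmax(), shifted.shape))  # (4, 5)
```

The sketch only contrasts invariance (output unchanged under a transformation) with equivariance (output transforms predictably); DECA's actual mechanism for this is capsule routing, not argmax.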

DECA: Deep viewpoint-Equivariant human pose estimation using Capsule Autoencoders / Garau, Nicola; Bisagno, Niccolò; Brodka, Piotr; Conci, Nicola. - (2021), pp. 11657-11666. (Paper presented at ICCV 2021, held virtually, 11-17 October 2021) [10.1109/ICCV48922.2021.01147].

DECA: Deep viewpoint-Equivariant human pose estimation using Capsule Autoencoders

Garau Nicola;Bisagno Niccolò;Conci Nicola
2021-01-01

Year: 2021
Proceedings: 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
Place: NJ
Publisher: IEEE
ISBN: 978-1-6654-2812-5; 978-1-6654-2813-2
Garau, Nicola; Bisagno, Niccolò; Brodka, Piotr; Conci, Nicola
Files in this item:

Garau_DECA_Deep_Viewpoint-Equivariant_Human_Pose_Estimation_Using_Capsule_Autoencoders_ICCV_2021_paper.pdf
Access: open access
Description: ICCV paper Open Access Version
Type: Publisher's version (Publisher's layout)
License: All rights reserved
Size: 2.65 MB
Format: Adobe PDF

DECA_Deep_viewpoint-Equivariant_human_pose_estimation_using_Capsule_Autoencoders.pdf
Access: archive administrators only
Type: Publisher's version (Publisher's layout)
License: All rights reserved
Size: 3.28 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/330721
Citations
  • PMC: n/a
  • Scopus: 6
  • Web of Science: 1