
Skeleton-Aware Motion Retargeting Using Masked Pose Modeling / Martinelli, G.; Garau, N.; Bisagno, N.; Conci, N. - 15624:(2025), pp. 287-303. (Workshops held in conjunction with the 18th European Conference on Computer Vision, ECCV 2024, 2024) [10.1007/978-3-031-92387-6_21].

Skeleton-Aware Motion Retargeting Using Masked Pose Modeling

Martinelli, G.; Garau, N.; Bisagno, N.; Conci, N.
2025-01-01

Abstract

Motion retargeting aims at transferring a given motion from a source character to a target one. The task becomes increasingly challenging as the differences in body shape and skeletal structure between the input and target characters grow. We present a novel approach for motion retargeting between skeletons, whose goal is to transfer the motion from a source skeleton to a target one in a different format. Our approach works when the two skeletons differ in scale, bone length, and number of joints. Going beyond previous approaches, our method can also retarget between skeletons that differ in hierarchy and topology, such as retargeting between animals and humans. We train our method, a transformer, using a random masking strategy in both time and space, aiming at reconstructing the joints of the masked input skeleton to obtain a deep representation of the motion. At test time, our method can retarget the input motion to different skeletons, reconciling the disparities between the source and the target. Our method outperforms the state of the art on the Mixamo dataset, which features a high variance between skeleton formats. Moreover, we show how our approach can effectively generalize to different domains by transferring between human motion and quadrupeds, and vice versa. Our code is available at www.github.com/mmlab-cv/skeleton-aware-motion-retargeting.
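The abstract describes a random masking strategy applied in both time and space, where masked joints of the input skeleton are reconstructed to learn a deep motion representation. A minimal sketch of such a spatio-temporal mask is shown below; the array layout `(frames, joints, 3)`, the function name, and the ratio parameters are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def random_spacetime_mask(motion, time_ratio=0.5, joint_ratio=0.5, seed=None):
    """Zero out randomly chosen frames and joints of a motion clip.

    motion: array of shape (frames, joints, 3) of joint positions.
    Returns (masked_motion, mask), where mask is a boolean (frames, joints)
    array marking the entries the model would have to reconstruct.
    NOTE: hypothetical sketch, not the authors' code.
    """
    rng = np.random.default_rng(seed)
    n_frames, n_joints, _ = motion.shape

    # Temporal masking: hide a random subset of whole frames.
    masked_frames = rng.choice(n_frames, size=int(n_frames * time_ratio),
                               replace=False)
    # Spatial masking: hide a random subset of joints across all frames.
    masked_joints = rng.choice(n_joints, size=int(n_joints * joint_ratio),
                               replace=False)

    mask = np.zeros((n_frames, n_joints), dtype=bool)
    mask[masked_frames, :] = True
    mask[:, masked_joints] = True

    masked_motion = motion.copy()
    masked_motion[mask] = 0.0  # masked entries are zeroed for the model input
    return masked_motion, mask
```

A transformer trained on `(masked_motion, motion)` pairs would then minimize a reconstruction loss over the entries where `mask` is true, which is what lets it later fill in joints that the source skeleton does not share with the target.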
Year: 2025
Series: Lecture Notes in Computer Science
Publisher: Springer Science and Business Media Deutschland GmbH, Gewerbestrasse 11, Cham, CH-6330, Switzerland
ISBN: 9783031923869; 9783031923876
Authors: Martinelli, G.; Garau, N.; Bisagno, N.; Conci, N.
Files for this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/465957
Warning: the data displayed have not been validated by the university.

Citations
  • PMC: not available
  • Scopus: 1
  • Web of Science: 0
  • OpenAlex: not available