Skeleton-Aware Motion Retargeting Using Masked Pose Modeling / Martinelli, G.; Garau, N.; Bisagno, N.; Conci, N. - Vol. 15624 (2025), pp. 287-303. (Workshops held in conjunction with the 18th European Conference on Computer Vision, ECCV 2024, Italy, 2024) [10.1007/978-3-031-92387-6_21].
Skeleton-Aware Motion Retargeting Using Masked Pose Modeling
Martinelli, G.; Garau, N.; Bisagno, N.; Conci, N.
2025-01-01
Abstract
Motion retargeting aims at transferring a given motion from a source character to a target one. The task becomes increasingly challenging as the differences in body shape and skeletal structure between the input and target characters grow. We present a novel approach for motion retargeting between skeletons, whose goal is to transfer the motion from a source skeleton to a target one in a different format. Our approach works when the two skeletons differ in scale, bone length, and number of joints. Surpassing previous approaches, our method can also retarget between skeletons that differ in hierarchy and topology, such as retargeting between animals and humans. We train our method as a transformer using a random masking strategy both in time and space, aiming at reconstructing the joints of the masked input skeleton to obtain a deep representation of the motion. At test time, our method can retarget the input motion to different skeletons, reconstructing the disparities between the source and the target. Our method outperforms state-of-the-art results on the Mixamo dataset, which features a high variance between skeleton formats. Moreover, we show how our approach can effectively generalize to different domains by transferring between human motion and quadrupeds, and vice versa. Our code is available at www.github.com/mmlab-cv/skeleton-aware-motion-retargeting.
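The abstract describes training with a random masking strategy applied both in time and in space over the input skeleton's joints. A minimal sketch of such spatio-temporal masking is shown below; the function name, tensor layout, and masking ratios are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def mask_pose_sequence(motion, time_ratio=0.3, joint_ratio=0.3, rng=None):
    """Randomly mask a motion clip along time and space (illustrative sketch).

    motion: array of shape (T, J, C) -- T frames, J joints, C channels.
    Masked entries are zeroed out; the boolean mask marks which joints
    the model would be asked to reconstruct.
    """
    rng = rng or np.random.default_rng(0)
    T, J, _ = motion.shape
    mask = np.zeros((T, J), dtype=bool)
    # Temporal masking: hide all joints in a random subset of frames.
    t_idx = rng.choice(T, size=int(T * time_ratio), replace=False)
    mask[t_idx, :] = True
    # Spatial masking: hide a random subset of joints across all frames.
    j_idx = rng.choice(J, size=int(J * joint_ratio), replace=False)
    mask[:, j_idx] = True
    masked = motion.copy()
    masked[mask] = 0.0
    return masked, mask

# Usage: a 60-frame clip of a hypothetical 22-joint skeleton with xyz channels.
clip = np.random.default_rng(1).standard_normal((60, 22, 3))
masked_clip, mask = mask_pose_sequence(clip)
```

A reconstruction loss on the masked joints then drives the transformer to learn a skeleton-agnostic motion representation, which is what allows retargeting across differing joint counts at test time.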



