Human action representation, recognition, and learning are important for fruitful human-robot cooperation. In this paper, we propose a novel coordinate-free, scale-invariant representation of 6D (position and orientation) motion trajectories. The advantages of the proposed invariant representation are twofold. First, gesture recognition performance improves thanks to the representation's invariance to different viewpoints and to the different body sizes of the actors. Second, the proposed representation is bidirectional: not only can the original Cartesian trajectory be converted into the six invariant values, but the motion in the original space can also be retrieved from the invariants. The former aspect enables robust human gesture recognition, while the latter allows the execution of robot motions without the need to store the Cartesian data. Experimental results illustrate the effectiveness of the proposed invariant representation for gesture recognition and accurate trajectory reconstruction.
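To illustrate the invariance idea in the abstract, the following sketch builds a toy coordinate-free, scale-invariant descriptor of a 3-D position trajectory and checks that it is unchanged under a rigid transform plus uniform scaling (i.e., a different viewpoint and a different body size). This is only an illustrative assumption, not the paper's actual six pose invariants: the descriptor here (normalized segment lengths and inter-segment angles) is a generic invariant construction.

```python
import numpy as np

def toy_invariants(traj):
    # Toy coordinate-free, scale-invariant descriptor of a 3-D position
    # trajectory (an illustration only, NOT the paper's 6 pose invariants).
    seg = np.diff(traj, axis=0)                  # consecutive displacements
    lengths = np.linalg.norm(seg, axis=1)
    norm_lengths = lengths / lengths.sum()       # uniform scaling cancels here
    cosines = np.einsum('ij,ij->i', seg[:-1], seg[1:]) / (lengths[:-1] * lengths[1:])
    angles = np.arccos(np.clip(cosines, -1.0, 1.0))  # rotation/translation invariant
    return norm_lengths, angles

# A helix-like trajectory
t = np.linspace(0, 2 * np.pi, 50)
traj = np.stack([np.cos(t), np.sin(t), 0.3 * t], axis=1)

# Random rigid transform plus uniform scaling: "another viewpoint / body size"
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))     # random orthonormal matrix
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                                # make it a proper rotation
traj2 = 2.5 * traj @ Q.T + np.array([1.0, -2.0, 0.5])

l1, a1 = toy_invariants(traj)
l2, a2 = toy_invariants(traj2)
print(np.allclose(l1, l2), np.allclose(a1, a2))  # True True
```

Translation drops out in the differencing step, rotation preserves norms and dot products, and uniform scaling cancels in both ratios, so the descriptor matches across the two views.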
Soloperto, R.; Saveriano, M.; Lee, D. A bidirectional invariant representation of motion for gesture recognition and reproduction. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA 2015), Washington State Convention Center, USA, pp. 6146-6152, 2015. DOI: 10.1109/ICRA.2015.7140062.
A bidirectional invariant representation of motion for gesture recognition and reproduction
Saveriano, M.
2015-01-01