
Click to Move: Controlling Video Generation with Sparse Motion / Ardino, P.; De Nadai, M.; Lepri, B.; Ricci, E.; Lathuiliere, S. - (2021), pp. 14729-14738. (Paper presented at the 18th IEEE/CVF International Conference on Computer Vision, ICCV 2021, held virtually in 2021) [10.1109/ICCV48922.2021.01448].

Click to Move: Controlling Video Generation with Sparse Motion

Ardino, P.; De Nadai, M.; Lepri, B.; Ricci, E.; Lathuiliere, S.
2021-01-01

Abstract

This paper introduces Click to Move (C2M), a novel framework for video generation in which the user controls the motion of the synthesized video through mouse clicks specifying simple trajectories for the key objects in the scene. Our model receives as input an initial frame, its corresponding segmentation map, and the sparse motion vectors encoding the input provided by the user. It outputs a plausible video sequence that starts from the given frame and whose motion is consistent with the user input. Notably, our proposed deep architecture incorporates a Graph Convolution Network (GCN) that models the movements of all the objects in the scene holistically, effectively combining the sparse user motion information with image features. Experimental results show that C2M outperforms existing methods on two publicly available datasets, demonstrating the effectiveness of our GCN framework at modelling object interactions. The source code is publicly available at https://github.com/PierfrancescoArdino/C2M.
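The core idea — propagating a sparse user-specified motion vector to all objects via graph convolution — can be illustrated with a minimal sketch. This is not the paper's actual architecture: all shapes, the fully connected scene graph, and the single Kipf–Welling-style propagation layer are illustrative assumptions. Here only one of three objects carries a user click; the layer mixes its motion signal with the appearance features of the others.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution layer: symmetrically normalised
    propagation D^-1/2 (A+I) D^-1/2, then a linear map and ReLU."""
    a_hat = adj + np.eye(adj.shape[0])       # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt   # normalised adjacency
    return np.maximum(norm @ feats @ weight, 0.0)

# Toy scene: 3 objects, fully connected graph (no self-edges here;
# gcn_layer adds them).
adj = np.ones((3, 3)) - np.eye(3)

# Per-object features: a 4-d appearance code concatenated with a 2-d
# motion vector. Only object 0 received a user click; the other two
# carry zero motion and must infer theirs through the graph.
rng = np.random.default_rng(0)
appearance = rng.normal(size=(3, 4))
motion = np.zeros((3, 2))
motion[0] = [5.0, -2.0]                      # user-specified shift
feats = np.concatenate([appearance, motion], axis=1)

weight = rng.normal(size=(6, 8))
out = gcn_layer(adj, feats, weight)          # (3, 8) object embeddings
```

After one round of propagation, every object's embedding depends on the clicked object's motion vector, which is the holistic behaviour the abstract attributes to the GCN.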
2021
Proceedings of the IEEE International Conference on Computer Vision
Virtual
Institute of Electrical and Electronics Engineers Inc.
978-1-6654-2812-5
Ardino, P.; De Nadai, M.; Lepri, B.; Ricci, E.; Lathuiliere, S.
Files for this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/341652
Warning: the data shown have not been validated by the university.

Citations
  • PubMed Central: not available
  • Scopus: 4
  • Web of Science: 3
  • OpenAlex: not available