Motion-supervised Co-part segmentation

Siarohin, Aliaksandr; Roy, Subhankar; Lathuiliere, Stephane; Tulyakov, Sergey; Ricci, Elisa; Sebe, Nicu
2021-01-01

Abstract

Recent co-part segmentation methods mostly operate in a supervised learning setting, which requires a large amount of annotated data for training. To overcome this limitation, we propose a self-supervised deep learning method for co-part segmentation. Unlike previous works, our approach builds on the idea that motion information inferred from videos can be leveraged to discover meaningful object parts. To this end, our method relies on pairs of frames sampled from the same video. The network learns to predict part segments together with a representation of the motion between the two frames, which permits reconstruction of the target image. Through extensive experimental evaluation on publicly available video sequences, we demonstrate that our approach produces improved segmentation maps compared to previous self-supervised co-part segmentation approaches.
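
For illustration only, the following is a minimal PyTorch sketch of the reconstruction-driven training signal described in the abstract: one network predicts K soft part masks, a second network predicts a simple per-part motion representation (reduced here to 2-D translations, a simplifying assumption), the source frame is warped part by part, and the blended result is compared against the target frame. This is not the authors' implementation; all module names, architectures and hyper-parameters below are hypothetical.

# Sketch of reconstruction-driven co-part segmentation training (hypothetical).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PartSegmenter(nn.Module):
    """Tiny stand-in for the part-segmentation network: K soft masks per frame."""
    def __init__(self, num_parts=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_parts, 3, padding=1),
        )

    def forward(self, frame):
        # Softmax over parts so the masks sum to one at every pixel.
        return torch.softmax(self.net(frame), dim=1)  # B x K x H x W


class PartMotionEstimator(nn.Module):
    """Predicts one 2-D translation per part from a concatenated frame pair
    (a stand-in for a richer per-part motion representation)."""
    def __init__(self, num_parts=5):
        super().__init__()
        self.num_parts = num_parts
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_parts * 2),
        )

    def forward(self, source, target):
        out = self.net(torch.cat([source, target], dim=1))
        return out.view(-1, self.num_parts, 2)  # B x K x 2


def warp(frame, translation):
    """Warp a frame by a per-sample 2-D translation in normalized [-1, 1] grid coordinates."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=frame.device),
        torch.linspace(-1, 1, w, device=frame.device),
        indexing="ij",
    )
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    grid = grid + translation.view(b, 1, 1, 2)
    return F.grid_sample(frame, grid, align_corners=True)


def reconstruction_loss(source, target, segmenter, motion_net):
    """Warp each source part with its own motion, blend with the warped soft
    masks, and penalize the L1 error to the target frame."""
    masks_src = segmenter(source)              # B x K x H x W
    translations = motion_net(source, target)  # B x K x 2
    recon = torch.zeros_like(target)
    for k in range(masks_src.shape[1]):
        t_k = translations[:, k]
        warped_frame = warp(source, t_k)
        warped_mask = warp(masks_src[:, k:k + 1], t_k)
        recon = recon + warped_mask * warped_frame
    return F.l1_loss(recon, target)


if __name__ == "__main__":
    segmenter, motion_net = PartSegmenter(), PartMotionEstimator()
    # Two random "frames" stand in for a source/target pair from the same video.
    src, tgt = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
    loss = reconstruction_loss(src, tgt, segmenter, motion_net)
    loss.backward()
    print(float(loss))

In this sketch the only supervision is the reconstruction error itself: both the masks and the motion are trained jointly because a good reconstruction requires parts that move coherently between the two frames.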
Year: 2021
Published in: Proceedings of the 25th International Conference on Pattern Recognition (ICPR 2020)
Place of publication: Piscataway, NJ
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN: 978-1-7281-8808-9
Authors: Siarohin, Aliaksandr; Roy, Subhankar; Lathuiliere, Stephane; Tulyakov, Sergey; Ricci, Elisa; Sebe, Nicu
Citation: Motion-supervised Co-part segmentation / Siarohin, Aliaksandr; Roy, Subhankar; Lathuiliere, Stephane; Tulyakov, Sergey; Ricci, Elisa; Sebe, Nicu. - (2021), pp. 9650-9657. (Paper presented at the 25th International Conference on Pattern Recognition, ICPR 2020, held in Milan, 10th-15th January 2021) [DOI: 10.1109/ICPR48806.2021.9412520].
Files in this record:
File: 09412520.pdf (access restricted to archive administrators)
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 4.35 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/326178
Citations
  • PMC: ND
  • Scopus: 13
  • Web of Science (ISI): 8
  • OpenAlex: ND