
Cross-Paced Representation Learning with Partial Curricula for Sketch-Based Image Retrieval / Xu, Dan; Alameda-Pineda, Xavier; Song, Jingkuan; Ricci, Elisa; Sebe, Nicu. - In: IEEE TRANSACTIONS ON IMAGE PROCESSING. - ISSN 1057-7149. - 27:9(2018), pp. 4410-4421. [10.1109/TIP.2018.2837381]

Cross-Paced Representation Learning with Partial Curricula for Sketch-Based Image Retrieval

Xu, Dan; Alameda-Pineda, Xavier; Song, Jingkuan; Ricci, Elisa; Sebe, Nicu
2018-01-01

Abstract

In this paper, we address the problem of learning robust cross-domain representations for sketch-based image retrieval (SBIR). While most SBIR approaches focus on extracting low- and mid-level descriptors for direct feature matching, recent works have shown the benefit of learning coupled feature representations to describe data from two related sources. However, cross-domain representation learning methods are typically cast as non-convex minimization problems that are difficult to optimize, leading to unsatisfactory performance. Inspired by self-paced learning (SPL), a learning methodology designed to overcome convergence issues related to local optima by exploiting the samples in a meaningful order (i.e., easy to hard), we introduce the cross-paced partial curriculum learning (CPPCL) framework. Compared with existing SPL methods, which consider only a single modality and cannot deal with prior knowledge, CPPCL is specifically designed to assess the learning pace by jointly handling data from dual sources and modality-specific prior information provided in the form of partial curricula. In addition, thanks to the learned dictionaries, we demonstrate that the proposed CPPCL embeds robust coupled representations for SBIR. Our approach is extensively evaluated on four publicly available datasets (i.e., the CUFS, Flickr15K, QueenMary SBIR, and TU-Berlin Extension datasets), showing superior performance over competing SBIR methods.
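The easy-to-hard sample ordering that the abstract attributes to self-paced learning can be illustrated with the standard hard-weighting SPL rule: a sample participates in the current training round only if its loss falls below a pace parameter, which is gradually increased. This is a generic sketch of classic SPL, not the paper's CPPCL objective; the function and variable names are ours.

```python
import numpy as np

def self_paced_weights(losses, lam):
    """Hard SPL rule: a sample is 'easy' (weight 1) if its current
    loss is below the pace parameter lam; otherwise it is excluded
    (weight 0) from this training round."""
    return (np.asarray(losses, dtype=float) < lam).astype(float)

# As training proceeds, lam grows, so harder samples are
# gradually admitted into the objective.
losses = [0.2, 1.5, 0.7, 3.0]
for lam in (1.0, 2.0, 4.0):
    print(lam, self_paced_weights(losses, lam))
```

With lam = 1.0 only the two lowest-loss samples are selected; by lam = 4.0 all four participate, mimicking the easy-to-hard curriculum described above.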
Files in this item:

08360145.pdf
  Access: restricted to repository managers only
  Type: publisher's version (publisher's layout)
  License: all rights reserved
  Size: 3.32 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/212715
Citations
  • PMC: 0
  • Scopus: 19
  • Web of Science: 12
  • OpenAlex: not available