Technological developments call for increasing perception and action capabilities in robots. Among other skills, robots need vision systems that can adapt to changing working conditions. Since these conditions are unpredictable, we need benchmarks that allow us to assess the generalization and robustness of our visual recognition algorithms. In this work we focus on robotic kitting in unconstrained scenarios. As a first contribution, we present a new visual dataset for the kitting task. Unlike standard object recognition datasets, it provides images of the same objects acquired under varying conditions in which camera, illumination and background change. This novel dataset makes it possible to test the robustness of robot visual recognition algorithms against a series of different domain shifts, both in isolation and combined. Our second contribution is a novel online adaptation algorithm for deep models, based on batch normalization layers, which continuously adapts a model to the current working conditions. Unlike standard domain adaptation algorithms, it requires no images from the target domain at training time. We benchmark the algorithm on the proposed dataset, showing that it closes the gap between the performance of a standard architecture and that of its counterpart adapted offline to the given target domain.
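The online adaptation described in the abstract hinges on updating batch-normalization statistics as batches from the current working conditions arrive at deployment time. The following is a minimal sketch of that general idea, not the paper's actual implementation; the class name `OnlineBatchNorm` and the `momentum` parameter are illustrative assumptions.

```python
import numpy as np

class OnlineBatchNorm:
    """Sketch of a batch-norm layer whose running statistics are updated
    online at test time, so normalization tracks the current working
    conditions (camera, illumination, background) without any target
    images being available during training."""

    def __init__(self, num_channels, momentum=0.1, eps=1e-5):
        self.mean = np.zeros(num_channels)   # running per-channel mean
        self.var = np.ones(num_channels)     # running per-channel variance
        self.momentum = momentum             # weight given to the newest batch
        self.eps = eps

    def __call__(self, x):
        # x: (batch_size, num_channels) activations from the incoming stream
        batch_mean = x.mean(axis=0)
        batch_var = x.var(axis=0)
        # Exponential moving average over incoming target-domain batches:
        # older statistics decay, so the layer adapts as conditions drift.
        self.mean = (1 - self.momentum) * self.mean + self.momentum * batch_mean
        self.var = (1 - self.momentum) * self.var + self.momentum * batch_var
        # Normalize with the adapted statistics.
        return (x - self.mean) / np.sqrt(self.var + self.eps)
```

After enough batches from a new domain, the running statistics converge toward that domain's activation statistics, which is what lets a pre-trained model be renormalized to unseen conditions without retraining its weights.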

Kitting in the Wild through Online Domain Adaptation / Mancini, Massimiliano; Karaoguz, Hakan; Ricci, Elisa; Jensfelt, Patric; Caputo, Barbara. - (2018), pp. 1103-1109. (Paper presented at IROS, held in Madrid, Spain, 1-5 October 2018) [10.1109/IROS.2018.8593862].

Kitting in the Wild through Online Domain Adaptation


2018
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Piscataway, NJ USA
IEEE
978-1-5386-8094-0
Mancini, Massimiliano; Karaoguz, Hakan; Ricci, Elisa; Jensfelt, Patric; Caputo, Barbara
Files in this record:
kitting.pdf — Publisher's version (publisher's layout), Adobe PDF, 2.2 MB
License: All rights reserved
Access: restricted (archive managers only)
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/225596
Citations
  • PMC: not available
  • Scopus: 44
  • Web of Science: 32