The RGB-D Triathlon: Towards Agile Visual Toolboxes for Robots

Massimiliano Mancini; Elisa Ricci
2019-01-01

Abstract

Deep networks have brought significant advances in robot perception, improving the capabilities of robots across several visual tasks, ranging from object detection and recognition to pose estimation, semantic scene segmentation and many others. Still, most approaches typically address visual tasks in isolation, resulting in overspecialized models that achieve strong performance in specific applications but work poorly on other (often related) tasks. This is clearly sub-optimal for a robot, which must often perform multiple visual recognition tasks simultaneously in order to properly act and interact with the environment. The problem is exacerbated by the limited computational and memory resources typically available onboard a robotic platform. The problem of learning flexible models that can handle multiple tasks in a lightweight manner has recently gained attention in the computer vision community, and benchmarks supporting this research have been proposed. In this work we study this problem in the robot vision context, proposing a new benchmark, the RGB-D Triathlon, and evaluating state-of-the-art algorithms in this novel challenging scenario. We also define a new evaluation protocol better suited to the robot vision setting. Results shed light on the strengths and weaknesses of existing approaches and on open issues, suggesting directions for future research.
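For context, the "flexible models which can handle multiple tasks in a lightweight manner" that the abstract refers to are commonly realized as a single shared feature extractor with small task-specific heads. The sketch below illustrates only that general pattern, not the method or protocol evaluated in the paper; the task names and class counts are hypothetical placeholders.

```python
# Illustrative only: a minimal multi-task network with a shared backbone
# and lightweight task-specific heads. This is NOT the paper's method;
# it sketches the "one model, many visual tasks" idea from the abstract.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultiTaskNet(nn.Module):
    def __init__(self, task_classes):
        super().__init__()
        backbone = resnet18(weights=None)
        feat_dim = backbone.fc.in_features  # 512 for resnet18
        backbone.fc = nn.Identity()         # keep only the shared features
        self.backbone = backbone
        # One small linear head per task; all other parameters are shared,
        # keeping the memory footprint close to a single-task model.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(feat_dim, n) for name, n in task_classes.items()}
        )

    def forward(self, x, task):
        return self.heads[task](self.backbone(x))

# Hypothetical task set, loosely inspired by the tasks named in the abstract.
model = MultiTaskNet({"object_recognition": 51, "scene_classification": 10})
images = torch.randn(4, 3, 224, 224)
logits = model(images, task="object_recognition")
print(logits.shape)  # torch.Size([4, 51])
```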
2019
2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Piscataway, NJ; Los Alamitos, CA; New York, NY
Institute of Electrical and Electronics Engineers Inc.
978-1-7281-4004-9
Cermelli, Fabio; Mancini, Massimiliano; Ricci, Elisa; Caputo, Barbara
The RGB-D Triathlon: Towards Agile Visual Toolboxes for Robots / Cermelli, Fabio; Mancini, Massimiliano; Ricci, Elisa; Caputo, Barbara. - (2019), pp. 6097-6104. (2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019, Macau, China, 4-8 November 2019) [10.1109/IROS40897.2019.8968562].
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/385010
Warning! The displayed data have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 2
  • Web of Science: 1
  • OpenAlex: ND