
Indoor object recognition in RGBD images with complex-valued neural networks for visually-impaired people / Trabelsi, R.; Jabri, I.; Melgani, F.; Smach, F.; Conci, N.; Bouallegue, A.. - In: NEUROCOMPUTING. - ISSN 0925-2312. - 330:(2019), pp. 94-103. [10.1016/j.neucom.2018.11.032]

Indoor object recognition in RGBD images with complex-valued neural networks for visually-impaired people

Melgani, F.; Conci, N.
2019-01-01

Abstract

We present a new multi-modal technique for assisting visually-impaired people in recognizing objects in public indoor environments. Unlike common methods, which address multi-class object recognition with a traditional single-label strategy, the comprehensive approach developed here allows samples to take more than one label at a time. We jointly exploit appearance and depth cues, specifically RGBD images, through a new complex-valued representation that overcomes limitations of traditional vision systems. Inspired by complex-valued neural networks (CVNNs) and multi-label learning techniques, we propose two methods to associate each input RGBD image with the set of labels corresponding to the object categories recognized at once. The first, ML-CVNN, is formalized as a ranking strategy: we use a fully complex-valued RBF network and extend it, via an adaptive clustering method, to solve multi-label problems. The second, L-CVNNs, follows a problem-transformation strategy: instead of using a single network to formalize classification as a ranking over the whole label set, we construct one CVNN per label and later aggregate the per-label predictions into the resulting multi-label vector. Extensive experiments carried out on two newly collected multi-labeled RGBD datasets demonstrate the effectiveness of the proposed techniques.
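To make the two strategies concrete, below is a minimal, hypothetical Python/NumPy sketch of the ideas the abstract describes: an RGBD pair fused into a single complex-valued signal (appearance in the real part, depth in the imaginary part, a common convention we assume here rather than the paper's exact encoding), a toy fully complex-valued RBF forward pass producing real-valued label scores that can be ranked (the ML-CVNN view), and a binary-relevance style aggregation of one network per label (the L-CVNNs view). All function names, the sech activation, and the parameter choices are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch (not the paper's actual implementation): complex-valued
# RGBD fusion plus toy versions of the ML-CVNN and L-CVNNs decision schemes.
import numpy as np

def encode_rgbd(rgb, depth):
    """Fuse appearance and depth into one complex-valued image.

    Assumed convention: appearance in the real part, depth in the
    imaginary part, so both cues flow through the network together.
    """
    gray = rgb.mean(axis=-1)        # collapse RGB to a single appearance channel
    return gray + 1j * depth        # complex image: real = appearance, imag = depth

def crbf_scores(z, centers, widths, w_out):
    """Toy fully complex-valued RBF forward pass (ML-CVNN view).

    z       : (d,)   complex feature vector
    centers : (k, d) complex RBF centers (e.g., obtained by clustering)
    widths  : (k,)   real kernel widths
    w_out   : (k, L) complex output weights, one column per label
    Returns L real-valued label scores that can be ranked or thresholded.
    """
    dist2 = ((z[None, :] - centers) ** 2).sum(axis=1) / widths ** 2
    phi = 1.0 / np.cosh(dist2)      # sech activation on complex distances (assumed)
    return (phi @ w_out).real       # rank/decide on the real part of the output

# --- toy usage: 8-D features, 4 hidden units, 3 labels ----------------------
rng = np.random.default_rng(0)
rgb, depth = rng.random((4, 4, 3)), rng.random((4, 4))
z_img = encode_rgbd(rgb, depth)     # (4, 4) complex image, for illustration

z = rng.standard_normal(8) + 1j * rng.standard_normal(8)
centers = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
widths = np.ones(4)
w_out = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

# ML-CVNN view: one network scores the whole label set at once
multilabel = crbf_scores(z, centers, widths, w_out) > 0.0

# L-CVNNs view: one single-output network per label, predictions aggregated
nets = [lambda x, j=j: crbf_scores(x, centers, widths, w_out[:, j:j + 1])[0]
        for j in range(3)]
multilabel_br = np.array([net(z) > 0.0 for net in nets])
```

The binary-relevance route (L-CVNNs) trades one joint ranking problem for L independent binary ones, the standard problem-transformation trade-off: simpler per-label training at the cost of ignoring label correlations that a single ranking network can capture.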
2019
Trabelsi, R.; Jabri, I.; Melgani, F.; Smach, F.; Conci, N.; Bouallegue, A.
Files in this record:

File: Neurocomputing-2019.pdf
Access: Repository managers only
Type: Publisher's version (Publisher's layout)
License: All rights reserved
Size: 1.24 MB
Format: Adobe PDF


Use this identifier to cite or link to this document: https://hdl.handle.net/11572/250859
Citations
  • PMC: n/a
  • Scopus: 13
  • Web of Science (ISI): 12