
Object Pose Detection to Enable 3D Interaction from 2D Equirectangular Images in Mixed Reality Educational Settings / Zanetti, Matteo; Luchetti, Alessandro; Maheshwari, Sharad; Kalkofen, Denis; Ortega, Manuel Labrador; De Cecco, Mariolino. - In: APPLIED SCIENCES. - ISSN 2076-3417. - 12:11(2022), p. 5309. [10.3390/app12115309]

Object Pose Detection to Enable 3D Interaction from 2D Equirectangular Images in Mixed Reality Educational Settings

Zanetti, Matteo (First); Luchetti, Alessandro (Second); De Cecco, Mariolino (Last)
2022-01-01

Abstract

In this paper, we address the challenge of estimating the 6DoF pose of objects in 2D equirectangular images. This solution enables the transition from an object's current pose in the image to its 3D model. In particular, it finds application in the educational use of 360-degree videos, where it makes the learning experience of students more engaging and immersive thanks to the possibility of interacting with 3D virtual models. We developed a general approach usable for any object and shape. The only requirement is an accurate CAD model, even without textures, of the item whose pose must be estimated. The developed pipeline has two main steps: segmentation of the vehicle from the image background and estimation of the vehicle pose. To accomplish the first task, we used deep learning methods, while for the second, we developed a 360-degree camera simulator in Unity to generate synthetic equirectangular images used for comparison. We conducted our tests using a miniature truck model whose CAD model was at our disposal. The developed algorithm was tested using a metrological analysis applied to real data. The results showed a mean difference of 1.5 degrees with a standard deviation of 1 degree from the ground truth data for rotations, and 1.4 cm with a standard deviation of 1.5 cm for translations, over a search range of +/- 20 degrees and +/- 20 cm, respectively.
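The pose-estimation step described above compares the real image against synthetic equirectangular renders; at its core is the equirectangular projection, which maps a 3D direction in the camera frame to pixel coordinates. A minimal sketch of that mapping follows (the paper uses a Unity simulator, not this code; function and parameter names here are illustrative assumptions):

```python
import numpy as np

def equirect_project(points, width, height):
    """Map 3D points (camera frame) to equirectangular pixel coordinates.

    Longitude (yaw) spans [-pi, pi] across the image width;
    latitude (pitch) spans [-pi/2, pi/2] across the image height.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    lon = np.arctan2(x, z)                                # yaw angle
    lat = np.arcsin(y / np.linalg.norm(points, axis=1))   # pitch angle
    u = (lon / np.pi + 1.0) * 0.5 * width                 # column index
    v = (0.5 - lat / np.pi) * height                      # row index (lat = +pi/2 -> top)
    return np.stack([u, v], axis=1)

# A point straight ahead of the camera (+z) lands at the image center.
pts = np.array([[0.0, 0.0, 1.0]])
print(equirect_project(pts, 4000, 2000))  # -> [[2000. 1000.]]
```

Rendering the CAD model under a candidate 6DoF pose with such a projection yields a synthetic equirectangular view that can be compared against the segmented real image.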
2022
11
Zanetti, Matteo; Luchetti, Alessandro; Maheshwari, Sharad; Kalkofen, Denis; Ortega, Manuel Labrador; De Cecco, Mariolino
Files in this record:
applsci-12-05309-v2_144dpi_74%.pdf — open access — Description: compressed PDF — Type: publisher's version (publisher's layout) — License: Creative Commons — Size: 790.47 kB — Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/392892
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science: 1