
Unsupervised continuous camera network pose estimation through human mesh recovery / Garau, Nicola; Conci, N. - (2019), pp. 1-6. (Paper presented at the 13th International Conference on Distributed Smart Cameras, ICDSC 2019, held in Trento, Italy, 9-11 September 2019) [10.1145/3349801.3349803].

Unsupervised continuous camera network pose estimation through human mesh recovery

Garau, Nicola; Conci, N.
2019-01-01

Abstract

Camera resectioning is essential in computer vision and 3D reconstruction to estimate the position of matching pinhole cameras in the 3D world. While the internal camera parameters are usually known or can easily be computed offline, in camera networks the extrinsic parameters need to be recomputed each time a camera changes position, which prevents smooth and dynamic network reconfiguration. In this work we propose a fully markerless, unsupervised, and automatic tool for the estimation of the extrinsic parameters of a camera network, based on 3D human mesh recovery from RGB videos. We show how it is possible to retrieve, from monocular images and with only weak prior knowledge of the intrinsic parameters, the real-world position of the cameras in the network, together with the floor plane. Our solution also works with a single RGB camera and allows the user to dynamically add, re-position, or remove cameras from the network.
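At its core, the approach recovers where each camera sits by relating the 3D body joints produced by human mesh recovery to their 2D detections in the image, i.e. by solving a camera resectioning (Perspective-n-Point) problem with only approximately known intrinsics. The sketch below is a minimal illustration of that idea using OpenCV's generic PnP solver; the intrinsic values, joint arrays, and solver choice are assumptions made for the example and do not reproduce the paper's actual pipeline.

# Minimal sketch (not the authors' implementation): estimate camera extrinsics
# from 3D body joints of a recovered human mesh and their 2D image detections,
# assuming only a weak prior on the pinhole intrinsics.
import numpy as np
import cv2

# Assumed approximate intrinsics for a 1920x1080 frame (illustrative values).
fx = fy = 1000.0
cx, cy = 960.0, 540.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Placeholder correspondences: in practice joints_3d would come from the
# recovered mesh (expressed in a person/floor-anchored world frame) and
# joints_2d from 2D keypoint detection in the image.
joints_3d = np.random.rand(24, 3).astype(np.float64)
joints_2d = (np.random.rand(24, 2) * np.array([1920.0, 1080.0])).astype(np.float64)

# Solve the Perspective-n-Point problem; RANSAC gives robustness to
# poorly localized joints.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(joints_3d, joints_2d, K, None)

if ok:
    R, _ = cv2.Rodrigues(rvec)         # rotation matrix (world -> camera)
    cam_pos = (-R.T @ tvec).ravel()    # camera centre in world coordinates
    print("Estimated camera position:", cam_pos)

With multiple people or frames, per-frame estimates of this kind could in principle be aggregated (e.g. averaged or jointly refined) to obtain a stable pose for each camera in the network; how that aggregation is done here is described in the full paper.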
2019
ACM International Conference Proceeding Series
USA
Association for Computing Machinery
9781450371896
Garau, Nicola; Conci, N.
Files in this record:
File: 3349801.3349803.pdf
Access: restricted to repository administrators only
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 3.38 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/251254
Citations
  • PMC: not available
  • Scopus: 1
  • ISI (Web of Science): 1
  • OpenAlex: not available