Demo: Unsupervised camera pose estimation through human mesh recovery / Garau, N.; Conci, N. - (2019), pp. 1-2. (Presented at the 13th International Conference on Distributed Smart Cameras, ICDSC 2019, held in Trento, Italy, September 2019) [10.1145/3349801.3357138].

Demo: Unsupervised camera pose estimation through human mesh recovery

Garau, N.; Conci, N.
2019-01-01

Abstract

Camera resectioning is essential in computer vision and 3D reconstruction to estimate the pose of pinhole cameras in the 3D world. While the internal camera parameters are usually known, or can easily be computed offline, in camera networks the extrinsic parameters need to be recomputed every time a camera changes position, preventing smooth and dynamic network reconfiguration. In this work we propose a fully markerless, unsupervised, and automatic tool for estimating the extrinsic parameters of a camera network, based on 3D human mesh recovery from RGB videos. We show how the real-world position of the cameras in the network, together with the floor plane, can be retrieved from monocular images with only weak prior knowledge of the intrinsic parameters. Our solution also works with a single RGB camera and allows the user to dynamically add, re-position, or remove cameras from the network.
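The geometric step the abstract alludes to — recovering a camera's real-world position from 3D body joints and their 2D image detections — can be sketched with a standard Direct Linear Transform (DLT). This is an illustrative reconstruction under assumed synthetic data, not the authors' implementation:

```python
import numpy as np

# Illustrative sketch (not the paper's code): given 3D joint positions from a
# human-mesh-recovery model and their 2D detections in the image, a Direct
# Linear Transform estimates the 3x4 projection matrix, whose right null
# vector is the camera centre in world coordinates.

def dlt_projection(pts3d, pts2d):
    """Estimate P (3x4, up to scale) from >= 6 non-coplanar correspondences."""
    rows = []
    for (x, y, z), (u, v) in zip(pts3d, pts2d):
        X = np.array([x, y, z, 1.0])
        rows.append(np.concatenate([X, np.zeros(4), -u * X]))
        rows.append(np.concatenate([np.zeros(4), X, -v * X]))
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 4)

def camera_center(P):
    """The camera centre is the dehomogenised right null vector of P."""
    _, _, vt = np.linalg.svd(P)
    c = vt[-1]
    return c[:3] / c[3]

# Synthetic check: a camera with known intrinsics K at centre C (identity
# rotation) observes random "joint" positions near the origin.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
C = np.array([1.0, 2.0, -10.0])
rng = np.random.default_rng(0)
joints3d = rng.uniform(-1.0, 1.0, size=(10, 3))
proj = (K @ (joints3d - C).T).T          # project: x = K (X - C)
joints2d = proj[:, :2] / proj[:, 2:]

P = dlt_projection(joints3d, joints2d)
C_est = camera_center(P)                 # should recover C
```

With noisy real detections one would normalise coordinates and refine by minimising reprojection error; the null-vector extraction above is only the standard closed-form starting point.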
2019
ACM International Conference Proceeding Series
USA
Association for Computing Machinery
9781450371896
Garau, N.; Conci, N.
Files in this item:
_Demo_Paper__ICDSC_2019 (1).pdf — Refereed author's manuscript (post-print); Licence: All rights reserved; Size: 697.19 kB; Format: Adobe PDF (archive administrators only)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/251250
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 0