
Iterative Superquadric Recomposition of 3D Objects from Multiple Views / Alaniz, Stephan; Mancini, Massimiliano; Akata, Zeynep. - (2023), pp. 17967-17977. (Paper presented at ICCV, held in Paris, France, 2-6 October 2023) [10.1109/ICCV51070.2023.01651].

Iterative Superquadric Recomposition of 3D Objects from Multiple Views

Alaniz, Stephan; Mancini, Massimiliano; Akata, Zeynep

Abstract

Humans are good at recomposing novel objects, i.e., they can identify commonalities between unknown objects from their general structure down to their finer details, an ability that is difficult for machines to replicate. We propose ISCO, a framework that recomposes an object using 3D superquadrics as semantic parts, directly from 2D views and without training a model that requires 3D supervision. To achieve this, we optimize the parameters of the superquadrics that compose a specific object instance by comparing its rendered silhouette against the object's 2D image silhouettes. Our ISCO framework iteratively adds new superquadrics wherever the reconstruction error is high, abstracting first coarse regions and then finer details of the target object. With this simple coarse-to-fine inductive bias, ISCO assigns consistent superquadrics to related object parts despite having no semantic supervision. Since ISCO does not train a neural network, it is also inherently robust to out-of-distribution objects. Experiments show that, compared to recent single-instance superquadric reconstruction approaches, ISCO provides consistently more accurate 3D reconstructions, even from images in the wild. Code is available at https://github.com/ExplainableML/ISCO.
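To ground the mechanism the abstract describes, the following is a minimal PyTorch sketch, not the authors' implementation: it fits a single superquadric to a binary silhouette by gradient descent, using Barr's standard inside-outside function with a sigmoid relaxation. The 32^3 sample grid, the max-projection "renderer", the toy spherical target, and all hyperparameters are illustrative assumptions; ISCO instead renders with a differentiable renderer over multiple views and iteratively adds superquadrics where the residual silhouette error is highest.

```python
import torch

def superquadric_occupancy(points, scale, eps, center):
    """Soft inside/outside function of an axis-aligned superquadric
    (Barr's inside-outside function; rotation omitted for brevity)."""
    local = ((points - center) / scale.abs().clamp_min(1e-3)).abs().clamp_min(1e-6)
    x, y, z = local.unbind(-1)
    e1, e2 = eps.clamp(0.2, 2.0)  # keep exponents in a numerically safe range
    f = (x ** (2 / e2) + y ** (2 / e2)) ** (e2 / e1) + z ** (2 / e1)
    # Sigmoid relaxation: ~1 inside the surface (f < 1), ~0 outside.
    return torch.sigmoid(10.0 * (1.0 - f))

# Toy target (an assumption for illustration): the orthographic silhouette
# of a sphere of radius 0.6, sampled on a 32^3 grid, projected along z.
axes = [torch.linspace(-1.0, 1.0, 32)] * 3
grid = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1)  # (32, 32, 32, 3)
target = (grid.norm(dim=-1) < 0.6).any(dim=-1).float()           # (32, 32)

# Superquadric parameters, optimized directly by gradient descent.
scale = torch.tensor([0.3, 0.3, 0.3], requires_grad=True)
eps = torch.tensor([1.0, 1.0], requires_grad=True)
center = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([scale, eps, center], lr=5e-2)

for step in range(200):
    occ = superquadric_occupancy(grid.reshape(-1, 3), scale, eps, center)
    # Crude orthographic "render": max-project occupancy along the z-axis.
    sil = occ.reshape(32, 32, 32).max(dim=-1).values
    loss = torch.nn.functional.binary_cross_entropy(
        sil.clamp(1e-6, 1 - 1e-6), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

ISCO's coarse-to-fine loop would repeat a fit like this over multiple views, initializing each new superquadric at the region of highest remaining reconstruction error, so that early superquadrics capture coarse structure and later ones capture finer detail.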
Year: 2023
Conference: 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France
Publisher: IEEE Computer Society, Piscataway, NJ, USA
ISBN: 979-8-3503-0718-4
Files in this record:

Alaniz_Iterative_Superquadric_Recomposition_of_3D_Objects_from_Multiple_Views_ICCV_2023_paper.pdf
Access: Open access
Description: ICCV paper; this is the Open Access version, provided by the Computer Vision Foundation.
Type: Refereed author's manuscript (post-print)
License: All rights reserved
Size: 2.32 MB, Adobe PDF

Iterative_Superquadric_Recomposition_of_3D_Objects_from_Multiple_Views.pdf
Access: Repository administrators only
Type: Publisher's layout (editorial version)
License: All rights reserved
Size: 2.86 MB, Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/400791
Citations
  • PubMed Central: not available
  • Scopus: 2
  • Web of Science: 0