

Open World Compositional Zero-Shot Learning

Mancini, M.;
2021-01-01

Abstract

Compositional Zero-Shot Learning (CZSL) requires recognizing state-object compositions unseen during training. In this work, instead of assuming prior knowledge about the unseen compositions, we operate in the open world setting, where the search space includes a large number of unseen compositions, some of which might be unfeasible. In this setting, we start from the cosine similarity between visual features and compositional embeddings. After estimating the feasibility score of each composition, we use these scores either to directly mask the output space or as a margin for the cosine similarity between visual features and compositional embeddings during training. Our experiments on two standard CZSL benchmarks show that all methods suffer severe performance degradation when applied in the open world setting. While our simple CZSL model achieves state-of-the-art performance in the closed world scenario, our feasibility scores boost the performance of our approach in the open world setting, clearly outperforming the previous state of the art. Code is available at: https://github.com/ExplainableML/czsl.
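The masking idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`cosine_scores`, `masked_prediction`) and the use of a simple feasibility threshold are assumptions made here for clarity; the paper's actual feasibility estimation and margin-based training variant are not reproduced.

```python
import numpy as np

def cosine_scores(visual_feat, comp_embeds):
    # Cosine similarity between one visual feature vector and
    # every composition embedding (rows of comp_embeds).
    v = visual_feat / np.linalg.norm(visual_feat)
    c = comp_embeds / np.linalg.norm(comp_embeds, axis=1, keepdims=True)
    return c @ v

def masked_prediction(visual_feat, comp_embeds, feasibility, threshold=0.0):
    # Illustrative use of feasibility scores to mask the output space:
    # compositions whose feasibility falls below a (hypothetical)
    # threshold are excluded before taking the argmax.
    scores = cosine_scores(visual_feat, comp_embeds)
    scores = np.where(feasibility >= threshold, scores, -np.inf)
    return int(np.argmax(scores))
```

Under this sketch, an unfeasible composition can never be predicted even if its embedding happens to be the nearest to the visual feature, which is the intuition behind masking the open-world output space.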
2021
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Piscataway, NJ USA
IEEE Computer Society
978-1-6654-4509-2
Mancini, M.; Naeem, M. F.; Xian, Y.; Akata, Z.
Open World Compositional Zero-Shot Learning / Mancini, M.; Naeem, M. F.; Xian, Y.; Akata, Z. - (2021), pp. 5218-5226. (Paper presented at the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021, held in Nashville, TN, USA, 20-25 June 2021) [10.1109/CVPR46437.2021.00518].
Files in this item:
File / Size / Format
Mancini_Open_World_Compositional_Zero-Shot_Learning_CVPR_2021_paper.pdf

Open access

Description: Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version
Type: Refereed author's manuscript (post-print)
License: All rights reserved
Size: 1.9 MB
Format: Adobe PDF
View/Open
Open_World_Compositional_Zero-Shot_Learning.pdf

Archive administrators only

Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 1.27 MB
Format: Adobe PDF
View/Open

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/400764
Citations
  • Scopus: 62
  • Web of Science (ISI): 32