Towards Visual Semantics / Giunchiglia, Fausto; Erculiani, Luca; Passerini, Andrea. - In: SN COMPUTER SCIENCE. - ISSN 2661-8907. - 2021/2:(2021), pp. 44601-44617. [10.1007/s42979-021-00839-7]

Towards Visual Semantics

Fausto Giunchiglia; Luca Erculiani; Andrea Passerini
2021-01-01

Abstract

Lexical Semantics is concerned with how words encode mental representations of the world, i.e., concepts. We call this type of concept classification concepts. In this paper, we focus on Visual Semantics, namely, on how humans build concepts representing what they perceive visually. We call this second type of concept substance concepts. As shown in the paper, these two types of concepts are different and, furthermore, the mapping between them is many-to-many. We provide a theory and an algorithm for building substance concepts that are in a one-to-one correspondence with classification concepts, thus paving the way to a seamless integration between natural language descriptions and visual perception. This work builds upon three main intuitions: (i) substance concepts are modeled as visual objects, namely, sequences of similar frames, as perceived in multiple encounters; (ii) substance concepts are organized into a visual subsumption hierarchy based on the notions of Genus and Differentia; (iii) human feedback is exploited not to name objects but, rather, to align the hierarchy of substance concepts with that of classification concepts. The learning algorithm is implemented for the base case of a hierarchy of depth two. The experiments, though preliminary, show that the algorithm acquires the notions of Genus and Differentia with reasonable accuracy, despite seeing a small number of examples and receiving supervision on only a fraction of them.
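The paper's own code is not reproduced on this page. Purely as an illustrative sketch (every name below is a hypothetical choice, not taken from the paper), the following Python fragment shows one way to represent the two structures the abstract describes: a visual object as a sequence of per-frame feature vectors gathered during one encounter, and a depth-two subsumption hierarchy in which the parent node supplies the Genus while each child is told apart from its siblings by its Differentia.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class VisualObject:
        # One encounter with an object: a sequence of similar frames,
        # each reduced here to a plain feature vector.
        frames: List[list]

    @dataclass
    class SubstanceConcept:
        # A node in the visual subsumption hierarchy. The parent node
        # supplies the Genus; a node's own visual features act as the
        # Differentia distinguishing it from its siblings.
        label: str                                       # aligned classification concept
        encounters: List[VisualObject] = field(default_factory=list)
        children: List["SubstanceConcept"] = field(default_factory=list)

        def subsume(self, child: "SubstanceConcept") -> None:
            self.children.append(child)

    # Base case treated in the paper: a hierarchy of depth two.
    genus = SubstanceConcept("dog")
    genus.subsume(SubstanceConcept("dalmatian"))
    genus.subsume(SubstanceConcept("husky"))
    genus.children[0].encounters.append(VisualObject(frames=[[0.1, 0.4], [0.2, 0.3]]))

In this reading of the abstract, human feedback would then serve to attach classification-concept labels to nodes of this hierarchy, aligning it with the lexical one, rather than to name individual frames or objects.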
Files in this record:

Giunchiglia2021_Article_TowardsVisualSemantics.pdf
  Access: open access
  Type: Publisher's version (publisher's layout)
  License: Creative Commons
  Size: 4.67 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/319687
Citations
  • PMC: n/a
  • Scopus: 9
  • Web of Science: n/a