
Egocentric Hierarchical Visual Semantics

Luca ERCULIANI (first); Andrea BONTEMPELLI (second); Andrea PASSERINI (penultimate); Fausto GIUNCHIGLIA (last)
2023-01-01

Abstract

We are interested in aligning how people think about objects and what machines perceive; by this we mean that object recognition, as performed by a machine, should follow a process that resembles the one humans follow when thinking of an object associated with a certain concept. The ultimate goal is to build systems that can meaningfully interact with their users, describing what they perceive in the users' own terms. As established in the field of Lexical Semantics, humans organize the meaning of words in hierarchies, where the meaning of, e.g., a noun is defined in terms of the meaning of a more general noun, its genus, and of one or more differentiating properties, its differentia. The main tenet of this paper is that object recognition should implement a hierarchical process that follows the hierarchical semantic structure used to define the meaning of words. We achieve this goal by implementing an algorithm which, for any object, recursively recognizes its visual genus and its visual differentia. In other words, the recognition of an object is decomposed into a sequence of steps in which the locally relevant visual features are recognized. This paper presents the algorithm and a first evaluation.
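
To make the recursive genus/differentia idea concrete, the following is a minimal illustrative sketch in Python. It is a toy under explicit assumptions, not the authors' implementation: the names (ConceptNode, matches_differentia, recognize) are hypothetical, and the hand-written predicates stand in for the learned visual feature tests the paper describes.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class ConceptNode:
        # A node in the semantic hierarchy, e.g. object -> animal -> dog.
        name: str
        # Predicate testing whether this concept's differentia (its locally
        # relevant visual features) is present in the input. Hypothetical
        # stand-in for a learned visual classifier.
        matches_differentia: Callable[[dict], bool]
        children: List["ConceptNode"] = field(default_factory=list)

    def recognize(features: dict, node: ConceptNode) -> str:
        # At each level, test only the differentiae of the current genus's
        # children; descend into the first child whose differentia matches.
        for child in node.children:
            if child.matches_differentia(features):
                return recognize(features, child)
        # No child matches: the current genus is the most specific answer.
        return node.name

    # Toy usage with hand-written predicates in place of learned tests.
    root = ConceptNode("object", lambda f: True, [
        ConceptNode("animal", lambda f: f.get("legs", 0) > 0, [
            ConceptNode("dog", lambda f: f.get("barks", False)),
        ]),
    ])
    print(recognize({"legs": 4, "barks": True}, root))  # -> dog

The point the sketch tries to capture is that each recognition step consults only the locally relevant visual features of the current level, mirroring the genus/differentia structure used to define word meanings.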
2023
Proceedings of the Second International Conference on Hybrid Human-Artificial Intelligence
Amsterdam
IOS Press BV
978-1-64368-394-2
978-1-64368-395-9
Erculiani, Luca; Bontempelli, Andrea; Passerini, Andrea; Giunchiglia, Fausto
Egocentric Hierarchical Visual Semantics / Erculiani, Luca; Bontempelli, Andrea; Passerini, Andrea; Giunchiglia, Fausto. - 368:(2023), pp. 320-329. (Paper presented at the 2nd International Conference on Hybrid Human-Artificial Intelligence, HHAI 2023, held in München, 26th-30th June 2023) [10.3233/FAIA230095].
Files in this record:

hhai23_paper.pdf (open access)
Description: main paper
Type: Refereed author's manuscript (post-print)
License: Creative Commons
Size: 728.54 kB
Format: Adobe PDF

FAIA-368-FAIA230095.pdf (open access)
Type: Publisher's version (publisher's layout)
License: Creative Commons
Size: 534.65 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/377372
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: 0
  • OpenAlex: ND