
The visual representation of abstract verbs: Merging verb classification with iconicity in sign language / Scicluna, Simone; Strapparava, Carlo. - (2019), pp. 73-80. (Paper presented at the IEEE International Conference on Cognitive Computing, held in Milan, Italy, July 2019) [10.1109/ICCC.2019.00025].

The visual representation of abstract verbs: Merging verb classification with iconicity in sign language

Carlo Strapparava
2019-01-01

Abstract

Theories like the picture superiority effect state that the visual modality has a substantial advantage over the other human senses. This makes visual information vital in the acquisition of knowledge, such as in the learning of a language. Words can be graphically represented to illustrate the meaning of a message and facilitate its understanding. This method, however, becomes a limitation in the case of abstract words, like accept, belong, integrate and agree, which have no visual referent. The current research turns to sign languages to explore the common semantic elements that link words to each other. Such visual languages have been found to reveal enlightening patterns across signs of similar meanings, pointing towards the possibility of creating clusters of iconic meanings along with their respective graphic representations. Using insight from sign language and VerbNet's organisation of verb predicates, this study presents a novel organisation of 506 English abstract verbs classified by visual shape. Graphic animation was used to visually represent the 20 resulting classes of abstract verbs. To build confidence in the resulting product, which can be accessed at www.vroav.online, an online survey was created to gather judgements on the visuals' representativeness. Considerable agreement between participants was found, suggesting a positive way forward for this work, which may be developed as a language learning aid in educational contexts or as a multimodal language comprehension tool for digital text.
2019
Proceedings of 2019 IEEE International Conference on Cognitive Computing
USA
IEEE
978-1-7281-2711-8
Scicluna, Simone; Strapparava, Carlo
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11572/342882
Warning! The displayed data have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 2
  • Web of Science: 2
  • OpenAlex: ND