
Multi-Modal Subjective Context Modelling and Recognition / Qiang, Shen; Teso, Stefano; Zhang, Wanyi; Xu, Hao; Giunchiglia, Fausto. - 2787:(2020), pp. 32-36. (11th International Workshop Modelling and Reasoning in Context co-located with the 24th European Conference on Artificial Intelligence (ECAI 2020), Santiago de Compostela, Spain, 29 August - 3 September 2020).

Multi-Modal Subjective Context Modelling and Recognition

Qiang Shen; Stefano Teso; Wanyi Zhang; Hao Xu; Fausto Giunchiglia
2020-01-01

Abstract

Applications like personal assistants need to be aware of the user’s context, e.g., where they are, what they are doing, and with whom. Context information is usually inferred from sensor data, such as the GPS sensors and accelerometers on the user’s smartphone. This prediction task is known as context recognition. A well-defined context model is fundamental for successful recognition. Existing models, however, have two major limitations. First, they focus on only a few aspects, like location or activity, meaning that recognition methods based on them can compute and leverage only a few inter-aspect correlations. Second, existing models typically assume that context is objective, whereas in most applications context is best viewed from the user’s perspective. Neglecting these factors limits the usefulness of the context model and hinders recognition. We present a novel ontological context model that captures four dimensions, namely time, location, activity, and social relations. Moreover, our model defines three levels of description (objective context, machine context, and subjective context) that naturally support subjective annotations and reasoning. An initial context recognition experiment on real-world data hints at the promise of our model.
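To make the structure described in the abstract concrete, here is a minimal illustrative sketch (not taken from the paper; all names are hypothetical) of a context record covering the four dimensions — time, location, activity, and social relations — at the three description levels the abstract names:

```python
# Illustrative sketch only: one possible encoding of the four-dimensional,
# three-level context model described in the abstract. Class and field
# names are assumptions for illustration, not the authors' ontology.
from dataclasses import dataclass


@dataclass
class ContextDimensions:
    """The four context dimensions named in the abstract."""
    time: str       # e.g. a timestamp or a coarse label like "morning"
    location: str   # e.g. "office"
    activity: str   # e.g. "working"
    social: str     # e.g. "with colleagues"


@dataclass
class ContextRecord:
    """The three description levels named in the abstract."""
    objective: ContextDimensions   # the world state as it is
    machine: ContextDimensions     # what sensors/recognition infer
    subjective: ContextDimensions  # the user's own view/annotation


# A toy instance: the machine infers "office" from sensor data, while the
# user subjectively describes the same situation as "at work".
record = ContextRecord(
    objective=ContextDimensions("2020-08-29T10:00", "Room 12", "typing", "alone"),
    machine=ContextDimensions("2020-08-29T10:00", "office", "working", "alone"),
    subjective=ContextDimensions("morning", "at work", "writing a paper", "alone"),
)
print(record.machine.location)    # -> office
print(record.subjective.location) # -> at work
```

The separation into three parallel records is one simple way to let subjective annotations coexist with, and be compared against, machine-inferred context.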
2020
Proceedings of the Eleventh International Workshop Modelling and Reasoning in Context co-located with the 24th European Conference on Artificial Intelligence (ECAI 2020)
Santiago de Compostela, Galicia, Spain
CEUR-WS
Qiang, Shen; Teso, Stefano; Zhang, Wanyi; Xu, Hao; Giunchiglia, Fausto
Files in this record:

2020-ECAI-MRC-multi-modal context modelling and recognition.pdf
  Open access
  Type: Refereed post-print (author’s manuscript)
  License: All rights reserved
  Size: 1.13 MB
  Format: Adobe PDF

paper5.pdf
  Open access
  Type: Publisher’s version (publisher’s layout)
  License: Creative Commons
  Size: 921.72 kB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/277398
Citations
  • PMC: not available
  • Scopus: 1
  • Web of Science: not available
  • OpenAlex: not available