Assessing Annotation Consistency in the Wild

Fausto Giunchiglia; Mattia Zeni; Enrico Bignotti; Wanyi Zhang
2018

Abstract

Human annotation of sensor data underpins research areas such as participatory sensing and mobile crowdsensing. While much research has been devoted to assessing the quality of sensor data, the same cannot be said for annotations, which are fundamental to obtaining a clear understanding of users' experience. We present an evaluation of an interdisciplinary annotation methodology that allows users to continuously annotate their everyday life. The evaluation is carried out on a dataset from a project studying the behaviour of students and how it impacts their academic performance. We focus on the annotations concerning the locations and movements of students, and we evaluate annotation quality by checking its consistency. Results show that students are highly consistent with respect to a random baseline, and that these results can be further improved by exploiting the semantics of the annotations.
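
To make the idea of a consistency check concrete, here is a minimal, hypothetical Python sketch; it is not the authors' actual procedure, and the context representation (weekday and hour slots) and the pairwise-agreement measure are assumptions made purely for illustration. It estimates how often a user assigns the same location label in the same recurring context, and compares that rate against a random baseline obtained by shuffling labels across contexts.

import random
from collections import defaultdict

def consistency_rate(records):
    """records: list of (context, label) pairs, e.g. (('Mon', 9), 'university').

    Returns the fraction of annotation pairs that agree within each context.
    """
    by_context = defaultdict(list)
    for context, label in records:
        by_context[context].append(label)
    agree = total = 0
    for labels in by_context.values():
        # Count agreeing pairs among annotations given in the same context.
        for i in range(len(labels)):
            for j in range(i + 1, len(labels)):
                total += 1
                agree += labels[i] == labels[j]
    return agree / total if total else 0.0

def random_baseline(records, trials=1000):
    """Expected consistency when labels are shuffled across contexts."""
    contexts = [c for c, _ in records]
    labels = [l for _, l in records]
    rates = []
    for _ in range(trials):
        random.shuffle(labels)
        rates.append(consistency_rate(list(zip(contexts, labels))))
    return sum(rates) / len(rates)

# Toy usage: three annotations in the Monday-9:00 slot, two on Tuesday evening.
records = [(('Mon', 9), 'university'), (('Mon', 9), 'university'),
           (('Mon', 9), 'home'), (('Tue', 20), 'home'), (('Tue', 20), 'home')]
print(consistency_rate(records))   # 0.5
print(random_baseline(records))    # ~0.4, lower on average

In the same spirit, collapsing semantically equivalent labels (for example, merging fine-grained place names into broader categories) before counting agreements is one plausible way in which the semantics of annotations could raise measured consistency, as the abstract suggests.
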
2018
2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
Piscataway, NJ, USA
IEEE
978-1-5386-3227-7
978-1-5386-3226-0
Assessing Annotation Consistency in the Wild / Giunchiglia, Fausto; Zeni, Mattia; Bignotti, Enrico; Zhang, Wanyi. - (2018), pp. 561-566. (Paper presented at the IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), held in Athens, Greece, 19-23 March 2018.)
Files in this record:

Assessing Annotation Consistency in the Wild.pdf
  Access: Open access
  Type: Refereed author's manuscript (post-print)
  Licence: All rights reserved
  Size: 2.39 MB
  Format: Adobe PDF

08480236.pdf
  Access: Archive administrators only
  Type: Publisher's layout (editorial version)
  Licence: All rights reserved
  Size: 1.83 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/210434
Citations
  • PMC: not available
  • Scopus: 1
  • Web of Science: 0