Guideline-based evaluation of web readability / Miniukovich, A.; Scaltritti, M.; Sulpizio, S.; De Angeli, A. - (2019), pp. 1-12. (Paper presented at the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, held in Glasgow in 2019) [10.1145/3290605.3300738].

Guideline-based evaluation of web readability

Miniukovich, A.; Scaltritti, M.; Sulpizio, S.; De Angeli, A.
2019-01-01

Abstract

Effortless reading remains an issue for many Web users, despite the large number of readability guidelines available to designers. This paper presents a study of manual and automatic application of 39 readability guidelines in webpage evaluation. The study collected ground-truth readability for a set of 50 webpages using eye-tracking with average and dyslexic readers (n = 79). It then matched the ground truth against human-based (n = 35) and automatic evaluations. The results validated 22 guidelines as being connected to readability. The comparison between human-based and automatic results also revealed a complex picture: algorithms were as good as or better than human experts at evaluating webpages on specific guidelines, particularly those concerning low-level features of webpage legibility and text formatting. However, multiple guidelines still required human judgment related to understanding and interpreting webpage content. These results contribute a guideline categorization that lays the ground for future design evaluation methods.
Year: 2019
Published in: Conference on Human Factors in Computing Systems - Proceedings
Place: New York, NY
Publisher: Association for Computing Machinery
ISBN: 9781450359702
Miniukovich, A.; Scaltritti, M.; Sulpizio, S.; De Angeli, A.
Files in this item:

File: Miniukovich et al CHI Conference 2019.pdf
Access: Archive managers only
Description: Main article
Type: Editorial version (Publisher's layout)
License: All rights reserved
Size: 881.77 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/242310
Citations
  • PMC: ND
  • Scopus: 30
  • ISI: 16