The evaluation of e-learning applications deserves special attention, and evaluators need effective methodologies and appropriate guidelines to perform their task. We have proposed a methodology, called eLSE (e-Learning Systematic Evaluation), which combines a specific inspection technique with user testing. This inspection aims to enable inspectors who may lack extensive experience in evaluating e-learning systems to perform accurate evaluations. It is based on the use of evaluation patterns, called Abstract Tasks (ATs), which precisely describe the activities to be performed during inspection; for this reason, it is called AT inspection. In this paper, we present an empirical validation of the AT inspection technique: three groups of novice inspectors evaluated a commercial e-learning system applying the AT inspection, heuristic inspection, or user testing. Results showed an advantage of the AT inspection over the other two usability evaluation methods, demonstrating that Abstract Tasks are effective and efficient tools for guiding evaluators and improving their performance. Important methodological considerations on the reliability of usability evaluation techniques are discussed.
Record not yet validated
The data displayed have not yet undergone formal validation by the IRIS Staff, but they have nevertheless been transmitted to the Cineca Sito Docente (Loginmiur).
Title: Systematic evaluation of e-learning systems: an experimental validation
Authors: Ardito, C.; Costabile, M. F.; De Angeli, Antonella; Lanzilotti, R.
Title of the volume containing the paper: Proceedings of the 4th Nordic conference on Human-computer interaction: changing roles
Place of publication: New York
Year of publication: 2006
Scopus identifier: 2-s2.0-34547199528
Appears in collections: 04.1 Saggio in atti di convegno (Paper in proceedings)