Automatically generated test cases are usually evaluated in terms of their fault-revealing or coverage capability. Besides these two aspects, test cases are also the major source of information for fault localization and fixing. The impact of automatically generated test cases on the debugging activity, compared to the use of manually written test cases, has never been studied before. In this paper we report the results obtained from two controlled experiments with human subjects performing debugging tasks using automatically generated or manually written test cases. We investigate whether the features of the former type of test cases, which make them less readable and understandable (e.g., unclear test scenarios, meaningless identifiers), have an impact on the accuracy and efficiency of debugging. The empirical study is aimed at investigating whether, despite the lack of readability in automatically generated test cases, subjects can still take advantage of them during debugging. © 2012 IEEE.

An empirical study about the effectiveness of debugging when random test cases are used / Ceccato, M.; Marchetto, A.; Mariani, L.; Nguyen, C. D.; Tonella, P. - (2012), pp. 452-462. (Paper presented at the 34th International Conference on Software Engineering, ICSE 2012, held in Zurich in 2012) [10.1109/ICSE.2012.6227170].

An empirical study about the effectiveness of debugging when random test cases are used

Marchetto A.;
2012-01-01

2012
Proceedings - International Conference on Software Engineering
USA
IEEE
978-1-4673-1066-6
978-1-4673-1067-3
Ceccato, M.; Marchetto, A.; Mariani, L.; Nguyen, C. D.; Tonella, P.
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11572/331376
Warning: the data displayed have not been validated by the university.

Citations
  • PubMed Central: n/a
  • Scopus: 19
  • Web of Science: 15