Experimental evaluation and comparison of techniques, algorithms, or complete systems is a crucial requirement for assessing the practical impact of research results. The quality of published experimental results is often limited for several reasons, such as limited time, the unavailability of standard benchmarks, or a shortage of computing resources. Moreover, achieving an independent, consistent, complete, and insightful assessment of different alternatives in the same domain is a time- and resource-consuming task. We demonstrate Liquid Benchmark, a cloud-based service that provides collaborative platforms to simplify the task of peer researchers in performing high-quality experimental evaluations and to guarantee a transparent scientific crediting process. The service allows building repositories of competing research implementations, sharing testing computing platforms, collaboratively building the specifications of standard benchmarks, and letting end-users easily create and run testing experiments and share their results.
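The abstract describes a workflow rather than an API: a collaboratively defined benchmark specification, a repository of competing implementations, and experiments whose results are collected and shared. The sketch below illustrates that workflow under stated assumptions; every class and function name is a hypothetical illustration, not the actual Liquid Benchmark service interface.

```python
"""Minimal sketch of the benchmarking-as-a-service workflow from the abstract.
All names here (BenchmarkSpec, Implementation, run_experiment, ...) are
illustrative assumptions, not the real Liquid Benchmark API."""

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class BenchmarkSpec:
    """A collaboratively built benchmark: named tasks mapped to their inputs."""
    name: str
    tasks: Dict[str, object]


@dataclass
class Implementation:
    """A competing research implementation registered in the shared repository."""
    name: str
    run_task: Callable[[object], object]  # executes one benchmark task


@dataclass
class ExperimentResult:
    implementation: str
    task: str
    output: object


def run_experiment(spec: BenchmarkSpec,
                   implementations: List[Implementation]) -> List[ExperimentResult]:
    """Run every registered implementation on every benchmark task and
    collect the results so they can be shared with other users."""
    results: List[ExperimentResult] = []
    for impl in implementations:
        for task_name, task_input in spec.tasks.items():
            results.append(
                ExperimentResult(impl.name, task_name, impl.run_task(task_input)))
    return results


if __name__ == "__main__":
    # Toy benchmark: sorting a small list, with two "competing" implementations.
    spec = BenchmarkSpec("toy-sort", {"small": [3, 1, 2]})
    impls = [Implementation("builtin-sorted", sorted),
             Implementation("lambda-sorted", lambda xs: sorted(xs))]
    for result in run_experiment(spec, impls):
        print(result)
```

In the demonstrated service these pieces would correspond to shared, cloud-hosted artifacts (benchmark specifications, implementation repositories, and published experiment results) rather than in-process objects as in this toy example.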
Title: Liquid benchmarks: benchmarking-as-a-service
Authors: Sakr, S.; Casati, Fabio
UniTN authors:
Book author(s): AA. VV.
Title of the volume containing the paper: Proceedings of JCDL '11
Place of publication: New York
Publisher: ACM Press
Year of publication: 2011
Scopus identifier: 2-s2.0-79960494836
Handle: http://hdl.handle.net/11572/89634
Appears in type(s): 04.1 Paper in proceedings