Liquid benchmarks: benchmarking-as-a-service

Sakr, S.; Casati, Fabio
2011-01-01

Abstract

Experimental evaluation and comparison of techniques, algorithms, or complete systems is a crucial requirement for assessing the practical impact of research results. The quality of published experimental results is usually limited by factors such as limited time, the unavailability of standard benchmarks, or a shortage of computing resources. Moreover, achieving an independent, consistent, complete, and insightful assessment of the different alternatives in the same domain is a time- and resource-consuming task. We demonstrate Liquid Benchmark, a cloud-based service that provides collaborative platforms to simplify the task of peer researchers in performing high-quality experimental evaluations and to guarantee a transparent scientific crediting process. The service supports building repositories of competing research implementations, sharing testing computing platforms, collaboratively building the specifications of standard benchmarks, and allowing end users to easily create and run testing experiments and share their results.
Published in: Proceedings of JCDL '11 (various authors), New York: ACM Press, 2011.
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/89634
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science (ISI): ND
  • OpenAlex: ND