Quality Assurance Strategies in Microtask Crowdsourcing / Kucherbaev, Pavel. - (2016), pp. 1-75.

Quality Assurance Strategies in Microtask Crowdsourcing

Kucherbaev, Pavel
2016-01-01

Abstract

Crowdsourcing is the outsourcing of a unit of work to a crowd of people via an open call for contributions. While there are various forms of crowdsourcing, such as open innovation, civic engagement and crowdfunding, in this work we focus specifically on microtasking. Microtasking is a branch of crowdsourcing in which work is presented as a set of identical microtasks, each requiring only a few minutes of a contributor's time to complete, usually in exchange for a reward of less than 1 USD. Labeling images, transcribing documents, analyzing the sentiment of short sentences and cleaning datasets are popular examples of work that can be solved as microtasks. Current microtask crowdsourcing platforms, such as CrowdFlower and Amazon Mechanical Turk, allow thousands of microtasks to be solved in parallel by hundreds of contributors available online. To tackle the problem of quality in microtask crowdsourcing, it is necessary to study different quality attributes, to investigate what causes low-quality results and slow task execution, and to identify effective methods both to assess these quality attributes and to assure that they remain high. We conducted the most extensive literature review of quality attributes and of assessment and assurance techniques carried out to date in the area of microtasking and crowdsourcing in general. We further advanced the state of the art in three research tracks: i) Improving accuracy and execution speed (the major track), where we monitor the in-page activity of each individual worker, automatically predict abandoned assignments that cause delays as well as assignments with low-quality results, and relaunch them to other workers using our tool ReLauncher; ii) Crowdsourcing complex processes, where we introduce BPMN extensions to design business processes composed of both crowd and machine tasks, and the crowdsourcing platform Crowd Computer to deploy these processes; and iii) Improving workers' user experience, where we identify the problems workers face when searching for tasks to work on, address these problems in our prototype of a task listing interface, and introduce a new mobile crowdsourcing platform, CrowdCafe, designed to minimize task search time and to motivate workers with tangible rewards, such as a coffee.
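
As an illustrative sketch only (not the actual ReLauncher implementation, which is not reproduced in this record), the relaunching idea from the first research track can be pictured as follows: in-page activity events are timestamped per assignment, assignments with no recent activity are flagged as likely abandoned, and flagged assignments are reposted for other workers. All names, the threshold value and the post_task callback below are hypothetical placeholders.

import time
from dataclasses import dataclass, field

ABANDON_THRESHOLD_S = 120  # hypothetical inactivity threshold, in seconds

@dataclass
class Assignment:
    assignment_id: str
    worker_id: str
    last_activity: float = field(default_factory=time.time)

class Relauncher:
    """Minimal sketch: track in-page activity and repost stalled assignments."""

    def __init__(self):
        self.active = {}  # assignment_id -> Assignment

    def record_activity(self, assignment_id, worker_id):
        # Called whenever in-page instrumentation reports a worker event
        # (keystroke, mouse move, scroll) for an assignment.
        a = self.active.setdefault(assignment_id, Assignment(assignment_id, worker_id))
        a.last_activity = time.time()

    def find_abandoned(self):
        # Assignments with no recorded activity for longer than the
        # threshold are treated as likely abandoned.
        now = time.time()
        return [a for a in self.active.values()
                if now - a.last_activity > ABANDON_THRESHOLD_S]

    def relaunch_abandoned(self, post_task):
        # post_task is a placeholder for the platform call that reposts the
        # work for other workers (e.g. creating a new assignment).
        for a in self.find_abandoned():
            del self.active[a.assignment_id]
            post_task(a.assignment_id)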
2016
XXVIII
2015-2016
Ingegneria e Scienza dell'Informazione (29/10/12-)
Information and Communication Technology
Marchese, Maurizio
Daniel, Florian
no
English
Settore INF/01 - Informatica (Computer Science)
Files in this record:
PhD-Thesis.pdf (open access)

Type: Doctoral Thesis
License: All rights reserved
Size: 1.64 MB, Adobe PDF


Use this identifier to cite or link to this item: https://hdl.handle.net/11572/368933