Training data creation is increasingly a key bottleneck for developing machine learning systems, especially deep learning systems. Active learning provides a cost-effective means for creating training data by selecting the most informative instances for labeling. Labels in real applications are often collected through crowdsourcing, which engages online crowds for data labeling at scale. Despite the importance of using crowdsourced data in the active learning process, an analysis of how the existing active learning approaches behave over crowdsourced data is currently missing. This paper aims to fill this gap by reviewing the existing active learning approaches and then testing a set of benchmark approaches on crowdsourced datasets. We provide a comprehensive and systematic survey of the recent research on active learning in the hybrid human–machine classification setting, where crowd workers contribute labels (often noisy) either to directly classify data instances or to train machine learning models. We identify three categories of state-of-the-art active learning methods according to whether and how predefined query strategies are employed for data sampling, namely fixed-strategy approaches, dynamic-strategy approaches, and strategy-free approaches. We then conduct an empirical study on their cost-effectiveness, showing that the performance of the existing active learning approaches is affected by many factors in hybrid classification contexts, such as the noise level of the data, the label fusion technique used, and the specific characteristics of the task. Finally, we discuss challenges and identify potential directions for designing active learning strategies for hybrid classification problems.

A review and experimental analysis of active learning over crowdsourced data / Sayin, Burcu; Krivosheev, Evgeny; Yang, Jie; Passerini, Andrea; Casati, Fabio. - In: ARTIFICIAL INTELLIGENCE REVIEW. - ISSN 0269-2821. - ELECTRONIC. - 54:7(2021), pp. 5283-5305. [10.1007/s10462-021-10021-3]

A review and experimental analysis of active learning over crowdsourced data

Sayin, Burcu; Krivosheev, Evgeny; Yang, Jie; Passerini, Andrea; Casati, Fabio
2021-01-01

Files in this record:
File: Sayin2021_Article_AReviewAndExperimentalAnalysis.pdf

Open access

Type: Refereed author's manuscript (post-print)
License: Creative Commons
Size: 1.28 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/349830
Citations
  • PubMed Central: n/a
  • Scopus: 23
  • Web of Science: 17
  • OpenAlex: n/a