
Crowdsourced dataset to study the generation and impact of text highlighting in classification tasks

Ramirez Medina, Jorge Daniel; Baez, M.; Casati, F.; Benatallah, B.
2019-01-01

Abstract

Objectives: Text classification is a recurrent goal in machine learning projects and a typical task in crowdsourcing platforms. Hybrid approaches that combine crowdsourcing and machine learning perform better than either in isolation and help reduce crowdsourcing costs. One way to combine crowd and machine efforts is to have algorithms highlight passages from texts and feed these to the crowd for classification. In this paper, we present a dataset to study text highlighting generation and its impact on document classification. Data description: The dataset was created through two series of experiments in which we asked workers (i) to classify documents according to a relevance question and to highlight the parts of the text that supported their decision, and, in a second phase, (ii) to assess document relevance supported by text highlighting of varying quality (six human-generated and six machine-generated highlighting conditions). The dataset features documents from two application domains (systematic literature reviews and product reviews), three document sizes, and three relevance questions of different levels of difficulty. We expect this dataset of 27,711 individual judgments from 1,851 workers to benefit not only this specific problem domain but also the larger class of classification problems where crowdsourced datasets with individual judgments are scarce.
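
Because the records are individual crowd judgments collected under different highlighting conditions, a natural use of the dataset is to compare worker accuracy across conditions. The Python sketch below illustrates that kind of aggregation on a small in-memory table; the column names (worker_id, doc_id, condition, judgment, gold_label) and the rows are hypothetical placeholders, not the actual schema or contents of the released files.

```python
# Hypothetical sketch: aggregating per-judgment crowd data into per-condition
# accuracy. Column names and rows are illustrative placeholders only, NOT the
# actual schema or values of the published dataset.
import pandas as pd

# Toy stand-in for the per-judgment records (one row = one worker judgment).
judgments = pd.DataFrame([
    {"worker_id": "w1", "doc_id": "d1", "condition": "human_highlight_1",
     "judgment": 1, "gold_label": 1},
    {"worker_id": "w2", "doc_id": "d1", "condition": "machine_highlight_1",
     "judgment": 0, "gold_label": 1},
    {"worker_id": "w3", "doc_id": "d2", "condition": "no_highlight",
     "judgment": 0, "gold_label": 0},
])

# Mark each judgment as correct or not, then average within each condition.
judgments["correct"] = judgments["judgment"] == judgments["gold_label"]
accuracy_by_condition = judgments.groupby("condition")["correct"].mean()
print(accuracy_by_condition)
```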
Crowdsourced dataset to study the generation and impact of text highlighting in classification tasks / Ramirez Medina, Jorge Daniel; Baez, M.; Casati, F.; Benatallah, B.. - In: BMC RESEARCH NOTES. - ISSN 1756-0500. - 12:1(2019), p. 820. [10.1186/s13104-019-4858-z]


Use this identifier to cite or link to this document: https://hdl.handle.net/11572/252050

Citations
  • PubMed Central: 1
  • Scopus: 5
  • Web of Science: 2
  • OpenAlex: not available