
How do you feel? Measuring User-Perceived Value for Rejecting Machine Decisions in Hate Speech Detection / Lammerts, P.; Lippmann, P.; Hsu, Y.-C.; Casati, F.; Yang, J. - (2023), pp. 834-844. (Paper presented at the AIES conference held in Montreal, QC, Canada, 8-10 August 2023) [10.1145/3600211.3604655].

How do you feel? Measuring User-Perceived Value for Rejecting Machine Decisions in Hate Speech Detection

Casati, F.; Yang, J.
2023-01-01

Abstract

Hate speech moderation remains a challenging task for social media platforms. Human-AI collaborative systems offer the potential to combine the strengths of humans’ reliability and the scalability of machine learning to tackle this issue effectively. While methods for task handover in human-AI collaboration exist that consider the costs of incorrect predictions, insufficient attention has been paid to accurately estimating these costs. In this work, we propose a value-sensitive rejection mechanism that automatically rejects machine decisions for human moderation based on users’ value perceptions regarding machine decisions. We conduct a crowdsourced survey study with 160 participants to evaluate their perception of correct and incorrect machine decisions in the domain of hate speech detection, as well as occurrences where the system rejects making a prediction. Here, we introduce Magnitude Estimation, an unbounded scale, as the preferred method for measuring user (dis)agreement with machine decisions. Our results show that Magnitude Estimation can provide a reliable measurement of participants’ perception of machine decisions. By integrating user-perceived value into human-AI collaboration, we further show that it can guide us in 1) determining when to accept or reject machine decisions to obtain the optimal total value a model can deliver and 2) selecting better classification models as compared to the more widely used target of model accuracy.
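The value-sensitive rejection mechanism described in the abstract can be sketched as a simple expected-value rule: accept a machine decision only when its expected user-perceived value exceeds the value users assign to a rejection (handover to a human). This is an illustrative sketch, not the paper's implementation; the function names and the numeric value parameters (`v_correct`, `v_incorrect`, `v_reject`) are hypothetical placeholders, not values measured in the study.

```python
def expected_value(p_correct, v_correct, v_incorrect):
    """Expected user-perceived value of accepting a machine decision,
    given the model's estimated probability of being correct."""
    return p_correct * v_correct + (1 - p_correct) * v_incorrect


def decide(p_correct, v_correct=1.0, v_incorrect=-4.0, v_reject=-0.5):
    """Accept the machine decision only when its expected perceived value
    is at least the (typically negative) perceived value of rejecting,
    i.e. deferring the item to a human moderator."""
    if expected_value(p_correct, v_correct, v_incorrect) >= v_reject:
        return "accept"
    return "reject"
```

Because users in such studies tend to penalize incorrect decisions far more heavily than they reward correct ones, the asymmetry between `v_correct` and `v_incorrect` pushes the rejection threshold well above the 50% confidence an accuracy-only criterion would use.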
2023
AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society
New York, NY, United States
Association for Computing Machinery, New York, United States
ISBN: 9798400702310
Lammerts, P.; Lippmann, P.; Hsu, Y.-C.; Casati, F.; Yang, J.
Files in this record:

File: how do you feel.pdf
Access: open access
Type: Publisher's version (Publisher's layout)
License: Creative Commons
Size: 798.62 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11572/397739
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: 0