
Impact of counterfactual emotions on the experience of algorithm aversion / Beretta, Andrea. - (2020 Feb 13), pp. 1-107. [10.15168/11572_252452]

Impact of counterfactual emotions on the experience of algorithm aversion

Beretta, Andrea
2020-02-13

Abstract

Today, algorithms and their applications increasingly enter the everyday life of each of us. Algorithms can help people make more effective choices by analyzing historical data and generating predictions that are presented to the user as advice and suggestions. Given the growing popularity of these suggestions, a better understanding of how people can improve their judgment through the suggestions presented is needed in order to improve the interface design of these applications. Since the inception of Artificial Intelligence (AI), technical progress has aimed at surpassing human performance and abilities (Crandall et al., 2018). Less consideration has been given to improving cooperative relationships between human agents and computer agents during decision tasks. No study to date has investigated the negative emotions that can arise from a bad outcome after following the suggestion given by an intelligent system, or how to cope with the potential distrust that could affect the long-term use of the system. According to Zeelenberg and colleagues (Martinez & Zeelenberg, 2015; Martinez, Zeelenberg, & Rijsman, 2011a; Zeelenberg & Pieters, 1999), two emotions are strongly related to wrong decisions: regret and disappointment. The objective of this research is to understand the different effects of disappointment and regret on participants’ behavioral responses to failed suggestions given by algorithm-based systems. The research investigates how people deal with a computer suggestion that leads to an unsatisfying result, compared with a human suggestion. To this end, three different scenarios were tested in three different experiments. In the first experiment, two wrong suggestions were compared in a between-subjects design using a flight-ticket scenario with two tasks. This first study analyzed exploratory models that explain how the source of the suggestion and trust in the system contribute to the experience of counterfactual emotions and to the attribution of responsibility. The second experiment used a typical purchase scenario, already employed in the psychological literature, with the aim of resolving the issues found in the first study and testing the algorithm aversion paradigm through the lens of a classic study from the regret literature. Results showed that, contrary to early predictions, people blame the source of the suggestion more when it comes from a human than when it comes from an intelligent computer. The third study aimed to understand the role of counterfactuals through a paradigmatic experiment from the algorithm aversion literature. Its main finding concerns reliance: people relied on the algorithmic suggestion more than on the human suggestion. Nevertheless, people felt more guilt after a wrong outcome with a computer than after a suggestion given by a person. These results are relevant for better understanding how people decide and trust algorithm-based systems after a wrong outcome. This thesis is the first attempt to understand algorithm aversion in terms of the experienced counterfactual emotions and their different behavioral consequences. However, some findings were contradictory across the three experiments; this could be due to the different scenarios and to participants’ thoughts and perceptions of artificial intelligence-based systems.
From this work, three suggestions can be drawn to help designers of intelligent systems. The first concerns the actual involvement of counterfactuals when the user is confronted with a wrong outcome, and the potential behavioral consequences that could affect future use of the intelligent system. The second concerns the importance of the context in which decisions are made, and the third suggests that designers rethink anthropomorphism as the best practice for presenting suggestions when wrong outcomes are possible. Future work will investigate users’ perceptions in more detail and test different scenarios and decision domains.
13-feb-2020
XXXII
2018-2019
Psicologia e scienze cognitive (29/10/12-)
Cognitive Science
Zancanaro, Massimo
Lepri, Bruno
no
English
Settore M-PSI/05 - Psicologia Sociale (Social Psychology)
Files in this item:
phd_unitn_andrea_beretta.pdf (open access)
Type: Doctoral thesis (Tesi di dottorato)
License: Creative Commons
Size: 3.46 MB
Format: Adobe PDF

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/252452