
When Doctors and AI Interact: on Human Responsibility for Artificial Risks / Verdicchio, Mario; Perin, Andrea. - In: PHILOSOPHY & TECHNOLOGY. - ISSN 2210-5433. - 2022, 35:1(2022), pp. 1-28. [10.1007/s13347-022-00506-6]

When Doctors and AI Interact: on Human Responsibility for Artificial Risks

Perin, Andrea
Last author
2022-01-01

Abstract

A discussion concerning whether to conceive Artificial Intelligence (AI) systems as responsible moral entities, also known as “artificial moral agents” (AMAs), has been going on for some time. In this regard, we argue that the notion of “moral agency” should be attributed only to humans, based on their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence, and argue against fully automated systems in medicine. With this perspective in mind, we focus on the use of AI-based diagnostic systems and shed light on the complex networks of persons, organizations, and artifacts that come into being when AI systems are designed, developed, and used in medicine. We then discuss relational criteria of judgment in support of the attribution of responsibility to humans when adverse events are caused or induced by errors in AI systems.
Settore IUS/17 - Criminal Law
Settore IUS/20 - Philosophy of Law
Settore INF/01 - Computer Science
Settore GIUR-14/A - Criminal Law
Settore GIUR-17/A - Philosophy of Law
Settore PHIL-03/A - Moral Philosophy
Verdicchio, Mario; Perin, Andrea
Files in this item:

File: Perin_2022_When Doctors And AI Interact_P&T(co-Verdicchio)_Phil&Tech.pdf
Access: open access
Type: Publisher's version (publisher's layout)
License: Creative Commons
Size: 777.99 kB
Format: Adobe PDF (View/Open)

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/468692
Citations
  • PMC: not available
  • Scopus: 37
  • Web of Science: not available
  • OpenAlex: 43