
The risks of using ChatGPT to obtain common safety-related information and advice / Oviedo-Trespalacios, Oscar; Peden, Amy E; Cole-Hunter, Thomas; Costantini, Arianna; Haghani, Milad; Rod, J. E.; Kelly, Sage; Torkamaan, Helma; Tariq, Amina; David Albert Newton, James; Gallagher, Timothy; Steinert, Steffen; Filtness, Ashleigh J.; Reniers, Genserik. - In: SAFETY SCIENCE. - ISSN 0925-7535. - 167:(2023), p. 106244. [10.1016/j.ssci.2023.106244]

The risks of using ChatGPT to obtain common safety-related information and advice

Costantini, Arianna;
2023-01-01

Abstract

ChatGPT is a highly advanced AI language model that has gained widespread popularity. It is trained to understand and generate human language and is used in various applications, including automated customer service, chatbots, and content generation. While it has the potential to offer many benefits, there are also concerns about its potential for misuse, particularly in relation to providing inappropriate or harmful safety-related information. To explore the capabilities of ChatGPT (specifically version 3.5) in providing safety-related advice, a multidisciplinary consortium of experts was formed to analyse nine cases across different safety domains: using mobile phones while driving, supervising children around water, crowd management guidelines, precautions to prevent falls in older people, air pollution when exercising, intervening when a colleague is distressed, managing job demands to prevent burnout, protecting personal data in fitness apps, and fatigue when operating heavy machinery. The experts concluded that there are potentially significant risks in using ChatGPT as a source of information and advice on safety-related issues. ChatGPT provided incorrect or potentially harmful statements and emphasised individual responsibility, potentially leading to an ecological fallacy. The study highlights the need for caution and expert verification when using ChatGPT for safety-related information, as well as the need for ethical considerations and safeguards to ensure users understand its limitations and receive appropriate advice, especially in low- and middle-income countries. The results of this investigation serve as a reminder that while AI technology continues to advance, caution must be exercised to ensure that its applications do not pose a threat to public safety.
Files in this record:

File: Oviedo-Trespalacios+et+al.+(2023)..._100dpi_75%.pdf
Access: open access
Type: Publisher's version (publisher's layout)
Licence: Creative Commons
Size: 8.54 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/385269
Citations
  • PMC: not available
  • Scopus: 13
  • Web of Science: 7