
Looking Inside the Black Box: Core Semantics Towards Accountability of Artificial Intelligence / Garigliano, Roberto; Mich, Luisa. - Print. - 11865:(2019), pp. 250-266. [10.1007/978-3-030-30985-5_16]

Looking Inside the Black Box: Core Semantics Towards Accountability of Artificial Intelligence

Garigliano, Roberto; Mich, Luisa
2019-01-01

Abstract

Recent advances in artificial intelligence raise a number of concerns. Among the challenges to be addressed by researchers, accountability of artificial intelligence solutions is one of the most critical. This paper focuses on artificial intelligence applications using natural language to investigate whether the core semantics defined for a large-scale natural language processing system could assist in addressing accountability issues. Core semantics aims to obtain a full interpretation of the content of natural language texts, representing both implicit and explicit knowledge, using only ‘subj-action-(obj)’ structures and causal, temporal, spatial and personal-world links. The first part of the paper offers a summary of the difficulties to be addressed and of the reasons why representing the meaning of a natural language text is relevant for artificial intelligence accountability. The second part illustrates a proof of concept for applying such a knowledge representation to support accountability, together with a detailed example of the analysis produced by a prototype system named CoreSystem. While only preliminary, these results give some new insights and indicate that the proposed knowledge representation can be used to support accountability, looking inside the box.
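To make the representation concrete, the following is a minimal illustrative sketch, not taken from the paper, of how ‘subj-action-(obj)’ structures with typed causal, temporal, spatial and personal-world links might be encoded; all class, field and link names here are assumptions:

```python
# Illustrative sketch only: a minimal encoding of the 'subj-action-(obj)'
# structures and typed links described in the abstract. Names and fields
# are assumptions, not the paper's actual design.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Event:
    subj: str                  # the acting entity
    action: str                # the action performed
    obj: Optional[str] = None  # optional object, as in 'subj-action-(obj)'
    # Typed links to other events: e.g. 'causal', 'temporal', 'spatial',
    # 'personal-world'.
    links: Dict[str, "Event"] = field(default_factory=dict)

# "The driver braked because the light turned red."
light_turned_red = Event(subj="light", action="turn_red")
driver_braked = Event(subj="driver", action="brake",
                      links={"causal": light_turned_red})

# Because the cause is an explicit, typed link rather than a weight inside
# an opaque model, an auditor can traverse it to ask *why* an action occurred.
cause = driver_braked.links.get("causal")
print(f"{driver_braked.subj} {driver_braked.action} because "
      f"{cause.subj} {cause.action}")
```

In such a sketch, the explicit links are what allow one to reconstruct why the system produced a given interpretation, which is the sense in which this kind of knowledge representation supports accountability.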
Year: 2019
Book: From Software Engineering to Formal Methods and Tools, and Back
Place: Cham
Publisher: Springer
ISBN: 978-3-030-30984-8
Authors: Garigliano, Roberto; Mich, Luisa
Files in this record:
Garigliano Mich Core semantics for AI Accountability.pdf

Access: Repository managers only
Description: Book chapter
Type: Refereed author's manuscript (post-print)
License: All rights reserved
Size: 744.48 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/243752
Citations
  • PMC: not available
  • Scopus: 3
  • Web of Science: not available