
In-Time Explainability in Multi-Agent Systems: Challenges, Opportunities, and Roadmap / Alzetta, Francesco; Giorgini, Paolo; Najjar, Amro; Schumacher, Michael I.; Calvaresi, Davide. - 12175:(2020), pp. 39-53. (Paper presented at the 2nd International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems, EXTRAAMAS 2020, held in Auckland, New Zealand, 9th-13th May 2020) [10.1007/978-3-030-51924-7_3].

In-Time Explainability in Multi-Agent Systems: Challenges, Opportunities, and Roadmap

Alzetta, Francesco; Giorgini, Paolo;
2020-01-01

Abstract

In the race for automation, distributed systems are required to perform increasingly complex reasoning to handle dynamic tasks, often without human supervision. On the one hand, systems facing strict timing constraints in safety-critical applications have mainly focused on predictability, leaving little room for complex planning and decision-making. Indeed, real-time techniques are highly efficient in predetermined, constrained, and controlled scenarios; nevertheless, they lack the flexibility needed to operate in evolving settings, where the software must adapt to changes in the environment. On the other hand, Intelligent Systems (IS) have increasingly adopted Machine Learning (ML) techniques (e.g., subsymbolic predictors such as neural networks). The seminal applications of such IS started in zero-risk domains, producing revolutionary results. However, the ever-increasing reliance on ML-based approaches has generated opaque systems, which are no longer socially acceptable, calling for eXplainable AI (XAI). The problem is exacerbated as IS approach safety-critical scenarios. This paper highlights the need for in-time explainability. In particular, it proposes to embrace the Real-Time Beliefs Desires Intentions (RT-BDI) framework as an enabler of eXplainable Multi-Agent Systems (XMAS) in time-critical XAI.
2020
Explainable, Transparent Autonomous Agents and Multi-Agent Systems: Second International Workshop, Revised Selected Papers
GEWERBESTRASSE 11, CHAM, CH-6330, SWITZERLAND
SPRINGER INTERNATIONAL PUBLISHING AG
978-3-030-51923-0
978-3-030-51924-7
Alzetta, Francesco; Giorgini, Paolo; Najjar, Amro; Schumacher, Michael I.; Calvaresi, Davide
Files in this item:

File: Alzetta2020_Chapter_In-TimeExplainabilityInMulti-A.pdf
Access: Archive administrators only
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 774.79 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/291779
Citations
  • PubMed Central: ND
  • Scopus: 10
  • Web of Science: 7
  • OpenAlex: ND