
Explainability in Predictive Process Monitoring: When Understanding Helps Improving / Rizzi, Williams; Di Francescomarino, Chiara; Maggi, Fabrizio Maria. - 392:(2020), pp. 141-158. (Paper presented at BPM 2020: Business Process Management Forum, held in Seville, Spain, 13-18 September 2020) [10.1007/978-3-030-58638-6_9].

Explainability in Predictive Process Monitoring: When Understanding Helps Improving

Williams Rizzi; Chiara Di Francescomarino; Fabrizio Maria Maggi
2020-01-01

Abstract

Predictive business process monitoring techniques aim to predict the future state of running business process executions, such as the remaining execution time, the next activity to be executed, or the final outcome with respect to a set of possible outcomes. In general, however, the accuracy of a predictive model is not optimal, so that in some cases the predictions it provides are wrong. In addition, state-of-the-art techniques for predictive process monitoring do not explain which features induced the predictive model to provide wrong predictions, making it difficult to understand why the model was mistaken. In this paper, we propose a novel approach to explain why a predictive model for outcome-oriented predictions provides wrong predictions, and eventually to improve its accuracy. The approach leverages post-hoc explainers and different encodings to identify the most common features that induce a predictor to make mistakes. By reducing the impact of those features, the accuracy of the predictive model is increased. The approach has been validated on both synthetic and real-life logs.
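The core idea sketched in the abstract — use a post-hoc explanation to find the features most often "blamed" for wrong predictions, then reduce their impact and retrain — can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the nearest-centroid classifier, the synthetic data (one reliable feature and one high-magnitude feature that is wrong for roughly 30% of instances), and the per-feature blame heuristic stand in for the paper's actual predictive models, trace encodings, and explainers.

```python
# Illustrative sketch (NOT the paper's implementation): train a simple model,
# attribute each wrong prediction to the features that pulled it toward the
# wrong class, rank features by blame count, and retrain without the worst one.
import random

random.seed(0)

def make_data(n=200):
    X, y = [], []
    for _ in range(n):
        label = random.randint(0, 1)
        informative = label + random.gauss(0, 0.2)  # genuinely separates classes
        # large-scale feature that contradicts the label ~30% of the time,
        # so a distance-based model over-trusts it and makes mistakes
        misleading = 5 * ((1 - label) if random.random() < 0.3 else label) \
            + random.gauss(0, 0.2)
        X.append([informative, misleading])
        y.append(label)
    return X, y

def centroids(X, y):
    # per-class feature means (nearest-centroid classifier)
    cents = {}
    for c in set(y):
        rows = [x for x, lab in zip(X, y) if lab == c]
        cents[c] = [sum(col) / len(rows) for col in zip(*rows)]
    return cents

def predict(cents, x):
    return min(cents, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, cents[c])))

def accuracy(cents, X, y):
    return sum(predict(cents, x) == t for x, t in zip(X, y)) / len(y)

def blame_counts(cents, X, y):
    # post-hoc, per-feature attribution computed on *wrong* predictions only:
    # a feature is "blamed" when it lies closer to the (wrong) predicted
    # centroid than to the true one, i.e. it pulled the decision off course
    counts = [0] * len(X[0])
    for x, true in zip(X, y):
        pred = predict(cents, x)
        if pred == true:
            continue
        for f in range(len(x)):
            toward_wrong = (x[f] - cents[pred][f]) ** 2
            toward_true = (x[f] - cents[true][f]) ** 2
            if toward_wrong < toward_true:
                counts[f] += 1
    return counts

X, y = make_data()
cents = centroids(X, y)
before = accuracy(cents, X, y)

blames = blame_counts(cents, X, y)
worst = max(range(len(blames)), key=blames.__getitem__)

# crudest possible way of "reducing the impact" of the offending feature:
# drop it entirely and retrain
X_reduced = [[v for f, v in enumerate(x) if f != worst] for x in X]
cents_reduced = centroids(X_reduced, y)
after = accuracy(cents_reduced, X_reduced, y)
```

Here the synthetic misleading feature plays the role the paper assigns to the features identified by the explainers: the blame counts single it out, and removing it raises `after` above `before`. The real approach works on event-log encodings and genuine post-hoc explainers rather than this toy attribution.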
2020
Proceedings of International Conference on Business Process Management
978-3-030-58637-9
Rizzi, Williams; Di Francescomarino, Chiara; Maggi, Fabrizio Maria
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/362623
Warning! The displayed data have not been validated by the university.

Citations
  • PMC: N/A
  • Scopus: 39
  • Web of Science: 25
  • OpenAlex: N/A