Explainability in Predictive Process Monitoring: When Understanding Helps Improving / Rizzi, Williams; Di Francescomarino, Chiara; Maggi, Fabrizio Maria. - 392:(2020), pp. 141-158. (Paper presented at BPM 2020: Business Process Management Forum, held in Seville, Spain, 13-18 September 2020) [10.1007/978-3-030-58638-6_9].
Explainability in Predictive Process Monitoring: When Understanding Helps Improving
Williams Rizzi; Chiara Di Francescomarino; Fabrizio Maria Maggi
2020-01-01
Abstract
Predictive business process monitoring techniques aim to predict the future state of running business process executions, such as the remaining execution time, the next activity to be executed, or the final outcome with respect to a set of possible outcomes. In general, however, the accuracy of a predictive model is not perfect, so that in some cases the predictions it provides are wrong. Moreover, state-of-the-art predictive process monitoring techniques do not explain which features led the predictive model to provide wrong predictions, making it difficult to understand why the model was mistaken. In this paper, we propose a novel approach to explain why a predictive model for outcome-oriented predictions provides wrong predictions and, ultimately, to improve its accuracy. The approach leverages post-hoc explainers and different encodings to identify the most common features that induce the predictor to make mistakes. By reducing the impact of those features, the accuracy of the predictive model is increased. The approach has been validated on both synthetic and real-life logs.
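The idea sketched in the abstract — use a post-hoc explainer to find the features most often blamed for wrong predictions, then reduce their impact and retrain — can be illustrated with a minimal, self-contained toy. This is not the paper's implementation: the data, the 1-rule "stump" model, and the simplistic blame-the-used-feature explainer are all made-up stand-ins for the real predictor, encodings, and explainers evaluated in the paper.

```python
# Illustrative sketch only: a spuriously correlated feature (f1) wins on the
# training log, a toy post-hoc "explainer" blames it for the test-time
# mistakes, and dropping it lets retraining recover the true signal (f0).
from collections import Counter

def train_stump(rows, labels, features):
    """1-rule model: pick the single feature whose value best matches the label."""
    def acc(f):
        return sum(r[f] == y for r, y in zip(rows, labels)) / len(rows)
    return max(features, key=acc)

def explain_error(row, label, used_feature):
    """Toy post-hoc explanation: blame the feature the model relied on."""
    return used_feature if row[used_feature] != label else None

# Hypothetical encoded traces: f0 is the true signal, f1 a spurious correlate.
train_X = [{"f0": 1, "f1": 1}] * 3 + [{"f0": 0, "f1": 0}] * 3 + [{"f0": 0, "f1": 1}] * 2
train_y = [1, 1, 1, 0, 0, 0, 1, 1]
test_X = [{"f0": 1, "f1": 1}, {"f0": 0, "f1": 0}, {"f0": 1, "f1": 0}, {"f0": 1, "f1": 0}]
test_y = [1, 0, 1, 1]

features = ["f0", "f1"]
model = train_stump(train_X, train_y, features)  # spurious f1 wins on the train log
acc_before = sum(r[model] == y for r, y in zip(test_X, test_y)) / len(test_y)

# Collect explanations for the wrong predictions and rank the blamed features.
blames = Counter()
for r, y in zip(test_X, test_y):
    if r[model] != y:
        blamed = explain_error(r, y, model)
        if blamed:
            blames[blamed] += 1

worst = blames.most_common(1)[0][0]  # most error-inducing feature
reduced = [f for f in features if f != worst]  # "reduce its impact" by dropping it
model2 = train_stump(train_X, train_y, reduced)
acc_after = sum(r[model2] == y for r, y in zip(test_X, test_y)) / len(test_y)
```

On this toy log the retrained model's test accuracy rises from 0.5 to 1.0; in the paper's setting the same loop would plug in a real classifier and real explainers (e.g. post-hoc attribution methods) over encoded event-log prefixes.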