
Rethinking and Recomputing the Value of Machine Learning Models / Sayin, Burcu; Yang, Jie; Chen, Xinyue; Passerini, Andrea; Casati, Fabio. - In: ARTIFICIAL INTELLIGENCE REVIEW. - ISSN 0269-2821. - Electronic. - In Press (2025).

Rethinking and Recomputing the Value of Machine Learning Models

Burcu Sayin; Jie Yang; Xinyue Chen; Andrea Passerini; Fabio Casati
2025-01-01

Abstract

In this paper, we argue that the prevailing approach to training and evaluating machine learning models often fails to consider their real-world application within organizational or societal contexts, where they are intended to create beneficial value for people. We propose a shift in perspective, redefining model assessment and selection to emphasize integration into workflows that combine machine predictions with human expertise, particularly in scenarios requiring human intervention for low-confidence predictions. Traditional metrics like accuracy and F-score fail to capture the beneficial value of models in such hybrid settings. To address this, we introduce a simple yet theoretically sound “value” metric that incorporates task-specific costs for correct predictions, errors, and rejections, offering a practical framework for real-world evaluation. Through extensive experiments, we show that existing metrics fail to capture real-world needs, often leading to suboptimal choices in terms of value when used to rank classifiers. Furthermore, we emphasize the critical role of calibration in determining model value, showing that simple, well-calibrated models can often outperform more complex models that are challenging to calibrate.
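As a rough illustration of the idea described in the abstract (not the paper's exact formulation), a cost-sensitive value metric for a classifier with a reject option can be sketched as follows. The function name, default costs, and threshold rule below are assumptions chosen for the example: each correct machine prediction earns a gain, each undetected error incurs a cost, and each rejection (a low-confidence prediction deferred to a human) incurs a smaller cost.

```python
def model_value(confidences, predictions, labels, threshold,
                gain_correct=1.0, cost_error=-5.0, cost_reject=-0.5):
    """Average per-instance value of a classifier in a hybrid workflow.

    confidences: model confidence for each prediction (0..1)
    predictions, labels: predicted and true class for each instance
    threshold: predictions below this confidence are rejected
               (i.e., deferred to a human expert)
    The gain/cost defaults are illustrative, task-specific placeholders.
    """
    total = 0.0
    for conf, pred, true in zip(confidences, predictions, labels):
        if conf < threshold:
            total += cost_reject      # human handles the instance
        elif pred == true:
            total += gain_correct     # machine prediction was correct
        else:
            total += cost_error       # undetected machine error
    return total / len(labels)
```

Under asymmetric costs like these, a model's ranking can diverge from its accuracy ranking: a well-calibrated model that rejects its uncertain cases may achieve higher value than a more accurate but overconfident one, which is the phenomenon the paper's experiments examine.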
Files in this record:

- 2209.15157.pdf (open access)
  Description: Rethinking and Recomputing the Value of ML Models (2022)
  Type: Non-refereed preprint
  License: Creative Commons
  Size: 4.92 MB, Adobe PDF

- Springer_Nature_Rethinking_and_Recomputing_the_Value_of_ML_Models-1.pdf (open access)
  Description: Accepted Manuscript
  Type: Refereed author’s manuscript
  License: Creative Commons
  Size: 1.51 MB, Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/383671
Note: the data displayed here has not been validated by the university.
