Teso, Stefano; Bontempelli, Andrea; Giunchiglia, Fausto; Passerini, Andrea. Interactive label cleaning with example-based explanations. Advances in Neural Information Processing Systems 34 (2021), pp. 12966-12977. (Paper presented at the 35th Conference on Neural Information Processing Systems, NeurIPS 2021, held online, 6th-14th December 2021).
Interactive label cleaning with example-based explanations
Teso, Stefano; Bontempelli, Andrea; Giunchiglia, Fausto; Passerini, Andrea
2021-01-01
Abstract
We tackle sequential learning under label noise in applications where a human supervisor can be queried to relabel suspicious examples. Existing approaches are flawed in that they relabel only incoming examples that look "suspicious" to the model. As a consequence, mislabeled examples that elude (or do not undergo) this cleaning step taint the training data and the model, with no further chance of being cleaned. We propose CINCER, a novel approach that cleans both new and past data by identifying pairs of mutually incompatible examples. Whenever it detects a suspicious example, CINCER identifies a counter-example in the training set that, according to the model, is maximally incompatible with the suspicious example, and asks the annotator to relabel either or both examples, resolving the possible inconsistency. The counter-examples are chosen to be maximally incompatible, so as to serve as explanations of the model's suspicion, and highly influential, so as to convey as much information as possible if relabeled. CINCER achieves this by leveraging an efficient and robust approximation of influence functions based on the Fisher information matrix (FIM). Our extensive empirical evaluation shows that clarifying the reasons behind the model's suspicions by cleaning the counter-examples helps acquire substantially better data and models, especially when paired with our FIM approximation.
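The abstract mentions approximating influence functions by replacing the loss Hessian with the Fisher information matrix. The following is a minimal illustrative sketch of that general idea for logistic regression, not the authors' implementation; all function names, the damping term, and the use of the empirical FIM are assumptions made for this toy example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss(w, x, y):
    # Gradient of the logistic loss at a single example (x, y), with y in {0, 1}.
    return (sigmoid(x @ w) - y) * x

def empirical_fim(w, X, Y, damping=1e-3):
    # Empirical FIM: F = (1/n) * sum_i g_i g_i^T, damped so it is invertible.
    G = np.stack([grad_loss(w, x, y) for x, y in zip(X, Y)])
    return G.T @ G / len(X) + damping * np.eye(len(w))

def fim_influence(w, X, Y, x_train, y_train, x_susp, y_susp):
    # Influence-function style score with the Hessian replaced by the FIM:
    #   IF(z_train, z_susp) ~ g(z_susp)^T F^{-1} g(z_train)
    F = empirical_fim(w, X, Y)
    g_susp = grad_loss(w, x_susp, y_susp)
    g_train = grad_loss(w, x_train, y_train)
    return g_susp @ np.linalg.solve(F, g_train)
```

Under this sketch, a training example would score highly against a suspicious example when their loss gradients align under the FIM metric; the damped FIM keeps the linear solve well-conditioned, which is one way such approximations gain robustness over inverting the exact Hessian.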
File | Description | Type | License | Size | Format | Access
---|---|---|---|---|---|---
interactive_label_cleaning_wit.pdf | Main article | Refereed author's manuscript (post-print) | All rights reserved | 498.19 kB | Adobe PDF | Open access (View/Open)
interactive_label_cleaning_wit-Supplementary Material.pdf | Supplementary material | Other attachments | All rights reserved | 696.11 kB | Adobe PDF | Open access (View/Open)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.