
Learning in the Wild with Incremental Skeptical Gaussian Processes

Andrea Bontempelli; Stefano Teso; Fausto Giunchiglia; Andrea Passerini
2020-01-01

Abstract

The ability to learn from human supervision is fundamental for personal assistants and other interactive applications of AI. Two central challenges for deploying interactive learners in the wild are the unreliable nature of the supervision and the varying complexity of the prediction task. We address a simple but representative setting, incremental classification in the wild, where the supervision is noisy and the number of classes grows over time. To tackle this task, we propose a redesign of skeptical learning centered around Gaussian Processes (GPs). Skeptical learning is a recent interactive strategy in which, if the machine is sufficiently confident that an example is mislabeled, it asks the annotator to reconsider her feedback. This is often enough to obtain clean supervision. Our redesign, dubbed ISGP, leverages the uncertainty estimates supplied by GPs to better allocate labeling and contradiction queries, especially in the presence of noise. Our experiments on synthetic and real-world data show that, as a result, while the original formulation of skeptical learning produces over-confident models that can fail completely in the wild, ISGP works well at varying levels of noise and as new classes are observed.
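The core idea of skeptical learning described above — contradicting the annotator only when the model is sufficiently confident the example is mislabeled — can be sketched with a GP classifier. This is a minimal illustration, not the paper's ISGP algorithm: the `skeptical_check` helper, the confidence threshold, and the toy data are all assumptions for illustration, using scikit-learn's `GaussianProcessClassifier` as the probabilistic model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def skeptical_check(gp, x, user_label, threshold=0.7):
    """Decide whether to contradict the annotator.

    If the GP assigns high probability (>= threshold) to a class
    that differs from the user's label, ask the user to reconsider;
    otherwise accept the label as-is.
    """
    probs = gp.predict_proba(x.reshape(1, -1))[0]
    pred = gp.classes_[np.argmax(probs)]
    if pred != user_label and probs.max() >= threshold:
        return "contradict"  # model is confident the label is wrong
    return "accept"          # trust the provided label

# Toy 1-D data: class 0 clustered around -2, class 1 around +2.
rng = np.random.RandomState(0)
X = np.concatenate([rng.normal(-2, 0.3, 20),
                    rng.normal(2, 0.3, 20)]).reshape(-1, 1)
y = np.array([0] * 20 + [1] * 20)
gp = GaussianProcessClassifier(kernel=RBF(length_scale=1.0)).fit(X, y)

# A point deep inside the class-1 cluster labeled "0" should be challenged.
print(skeptical_check(gp, np.array([2.0]), user_label=0))
print(skeptical_check(gp, np.array([2.0]), user_label=1))
```

The GP's posterior class probabilities play the role of the uncertainty estimates the abstract mentions: near the decision boundary the model is uncertain and accepts the user's label, while deep inside a cluster a conflicting label triggers a contradiction query.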
2020
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Darmstadt Germany
IJCAI
978-0-9992411-6-5
Bontempelli, Andrea; Teso, Stefano; Giunchiglia, Fausto; Passerini, Andrea
Learning in the Wild with Incremental Skeptical Gaussian Processes / Bontempelli, Andrea; Teso, Stefano; Giunchiglia, Fausto; Passerini, Andrea. - (2020), pp. 2886-2892. (Paper presented at the IJCAI conference held in Kyoto, Japan, January 5-10, 2021) [10.24963/ijcai.2020/399].
Files in this item:
File: 0399.pdf (Adobe PDF, 1.66 MB) — access restricted to archive administrators
Type: Publisher's version (publisher's layout)
License: All rights reserved

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/262933
Citations
  • PMC: ND
  • Scopus: 6
  • Web of Science: 0