Liu, Mingxuan; Roy, Subhankar; Zhong, Zhun; Sebe, Nicu; Ricci, Elisa. Large-Scale Pre-trained Models are Surprisingly Strong in Incremental Novel Class Discovery. In: Lecture Notes in Computer Science, vol. 15316, pp. 126-142 (2024). Paper presented at the 27th International Conference on Pattern Recognition (ICPR 2024), held in Kolkata in 2024. DOI: 10.1007/978-3-031-78444-6_9.

Large-Scale Pre-trained Models are Surprisingly Strong in Incremental Novel Class Discovery

Liu, Mingxuan; Roy, Subhankar; Zhong, Zhun; Sebe, Nicu; Ricci, Elisa
2024-01-01

Abstract

Discovering novel concepts in unlabelled datasets, and doing so continuously, is an important desideratum of lifelong learners. In the literature, such problems have been partially addressed under very restricted settings, where novel classes are learned either by jointly accessing a related labelled set (e.g., NCD) or by leveraging only a model pre-trained with supervision (e.g., class-iNCD). In this work, we challenge the status quo in class-iNCD and propose a learning paradigm where class discovery occurs continuously and in a truly unsupervised manner, without the need for any related labelled set. Specifically, we propose to exploit the richer priors of strong self-supervised pre-trained models (PTMs). To this end, we introduce simple baselines, composed of a frozen PTM backbone and a learnable linear classifier, that are not only easy to implement but also resilient under longer learning scenarios. We conduct an extensive empirical evaluation on a multitude of benchmarks and show the effectiveness of our proposed baselines compared with sophisticated state-of-the-art methods. The code is open source.
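
As a concrete illustration of the baseline described in the abstract, the following is a minimal PyTorch sketch, not the authors' released implementation: a frozen self-supervised pre-trained backbone paired with a learnable linear classifier. The DINO ViT-B/16 checkpoint loaded via torch.hub, the feature dimension, and all class and variable names are assumptions made for illustration only; any strong self-supervised PTM could take the backbone's place.

import torch
import torch.nn as nn

class FrozenPTMLinearBaseline(nn.Module):
    """Frozen self-supervised backbone + learnable linear head (illustrative sketch)."""
    def __init__(self, num_classes: int, feat_dim: int = 768):
        super().__init__()
        # Assumed backbone: DINO ViT-B/16 loaded from torch.hub; weights stay frozen.
        self.backbone = torch.hub.load("facebookresearch/dino:main", "dino_vitb16")
        for p in self.backbone.parameters():
            p.requires_grad = False  # the backbone is never updated
        # The linear classifier is the only learnable component.
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            feats = self.backbone(x)   # extract frozen features (CLS token, dim 768)
        return self.classifier(feats)  # linear prediction over the discovered classes

# Usage sketch: only the linear head's parameters are passed to the optimiser.
model = FrozenPTMLinearBaseline(num_classes=10)
optimizer = torch.optim.SGD(model.classifier.parameters(), lr=0.1)

Keeping the backbone frozen is what makes such a baseline cheap to train and, as the abstract argues, resilient over longer incremental learning scenarios, since only the lightweight linear head changes between discovery sessions.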
Year: 2024
Series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Place of publication: Heidelberg
Publisher: Springer Science and Business Media Deutschland GmbH
ISBN: 9783031784439, 9783031784446
Files in this item:
No files are associated with this item.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/442610