Large-Scale Pre-trained Models are Surprisingly Strong in Incremental Novel Class Discovery / Liu, Mingxuan; Roy, Subhankar; Zhong, Zhun; Sebe, Nicu; Ricci, Elisa. In: Proceedings of the 27th International Conference on Pattern Recognition (ICPR 2024), Kolkata, 2024. LNCS 15316, pp. 126-142. [10.1007/978-3-031-78444-6_9]
Large-Scale Pre-trained Models are Surprisingly Strong in Incremental Novel Class Discovery
Liu, Mingxuan; Roy, Subhankar; Zhong, Zhun; Sebe, Nicu; Ricci, Elisa
2024-01-01
Abstract
Discovering novel concepts in unlabelled datasets, and doing so continuously, is an important desideratum of lifelong learners. In the literature, such problems have been partially addressed under very restricted settings, where novel classes are learned by jointly accessing a related labelled set (e.g., NCD) or by leveraging only a model pre-trained with supervision (e.g., class-iNCD). In this work we challenge the status quo in class-iNCD and propose a learning paradigm where class discovery occurs continuously and in a truly unsupervised manner, without needing any related labelled set. Specifically, we propose to exploit the richer priors of strong self-supervised pre-trained models (PTMs). To this end, we introduce simple baselines, composed of a frozen PTM backbone and a learnable linear classifier, that are not only easy to implement but also resilient under longer learning scenarios. We conduct an extensive empirical evaluation on a multitude of benchmarks and show the effectiveness of our proposed baselines compared with sophisticated state-of-the-art methods. The code is open source.
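To make the "frozen PTM backbone + learnable linear classifier" recipe concrete, below is a minimal sketch, not the authors' released code. It assumes a publicly available self-supervised DINO ViT checkpoint loaded via torch.hub ('dino_vitb16', 768-d features) and a hypothetical number of novel classes; the unsupervised objective used to train the linear head (e.g., clustering-based pseudo-labels) is omitted.

```python
import torch
import torch.nn as nn

# Sketch of the baseline: frozen self-supervised backbone + trainable linear head.
# 'dino_vitb16' and the 768-d feature size are assumptions about the public DINO release.
backbone = torch.hub.load('facebookresearch/dino:main', 'dino_vitb16')
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False          # keep the pre-trained features fixed

num_novel_classes = 10               # hypothetical number of classes to discover
classifier = nn.Linear(768, num_novel_classes)  # the only trainable module

optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1)

def predict(images: torch.Tensor) -> torch.Tensor:
    """Return class logits for a batch of images using frozen features."""
    with torch.no_grad():
        feats = backbone(images)     # frozen features, shape [B, 768]
    return classifier(feats)         # logits over the discovered classes
```

Because only the linear head carries gradients, each incremental discovery step is cheap and the shared representation cannot drift, which is one plausible reason such baselines stay resilient over longer learning scenarios.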