Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation / Roy, Subhankar; Krivosheev, Evgeny; Zhong, Zhun; Sebe, Nicu; Ricci, Elisa. - (2021), pp. 5347-5356. (Paper presented at the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021, held as a virtual conference, 20-25 June 2021) [10.1109/CVPR46437.2021.00531].

Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation

Roy, Subhankar; Krivosheev, Evgeny; Zhong, Zhun; Sebe, Nicu; Ricci, Elisa
2021-01-01

Abstract

In this paper, we address multi-target domain adaptation (MTDA): given one labeled source dataset and multiple unlabeled target datasets that differ in data distribution, the task is to learn a robust predictor for all the target domains. We identify two key aspects that help alleviate multiple domain shifts in MTDA: feature aggregation and curriculum learning. To this end, we propose Curriculum Graph Co-Teaching (CGCT), which uses a dual classifier head, one of them being a graph convolutional network (GCN) that aggregates features from similar samples across the domains. To prevent the classifiers from over-fitting on their own noisy pseudo-labels, we develop a co-teaching strategy with the dual classifier head, assisted by curriculum learning to obtain more reliable pseudo-labels. Furthermore, when domain labels are available, we propose Domain-aware Curriculum Learning (DCL), a sequential adaptation strategy that first adapts to the easier target domains, followed by the harder ones. We experimentally demonstrate the effectiveness of our proposed frameworks on several benchmarks and advance the state of the art in MTDA by large margins (e.g., +5.6% on DomainNet).
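As a rough illustration only (not the authors' implementation, whose details are in the paper), the two ideas named in the abstract can be sketched in a few lines of NumPy: a GCN head that aggregates features over a similarity graph spanning samples from different domains, and a co-teaching step where each head receives pseudo-labels from the other head. The affinity threshold, feature dimensions, and class count below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_affinity(X, threshold=0.2):
    """Adjacency matrix from pairwise cosine similarity; edges connect
    similar samples regardless of which domain they came from
    (a simplified stand-in for the paper's edge network)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    A = (Xn @ Xn.T > threshold).astype(float)
    np.fill_diagonal(A, 1.0)  # self-loops keep every node connected
    return A

def gcn_layer(X, A, W):
    """One graph-convolution step: symmetric normalisation
    D^{-1/2} A D^{-1/2}, feature aggregation, linear projection, ReLU."""
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    A_hat = d_inv_sqrt @ A @ d_inv_sqrt
    return np.maximum(A_hat @ X @ W, 0.0)

# Toy mixed batch: 4 source + 4 target feature vectors (dims illustrative)
X = rng.normal(size=(8, 16))
A = cosine_affinity(X)
logits_gcn = gcn_layer(X, A, rng.normal(size=(16, 5)))  # GCN head, 5 classes
logits_mlp = X @ rng.normal(size=(16, 5))               # plain MLP head

# Co-teaching (simplified): each head is supervised with pseudo-labels
# produced by the *other* head, so neither over-fits its own noise.
pseudo_for_mlp = logits_gcn.argmax(axis=1)
pseudo_for_gcn = logits_mlp.argmax(axis=1)
```

In the full method these pseudo-labels are additionally filtered by a curriculum, so that only target samples deemed reliable at the current stage contribute to training.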
2021
Proceedings: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Piscataway, NJ
IEEE
978-1-6654-4509-2
Files in this record:

RoyCVPR21.pdf (open access)
Type: Refereed author's manuscript (post-print)
License: All rights reserved
Size: 7.53 MB
Format: Adobe PDF (View/Open)

Curriculum_Graph_Co-Teaching_for_Multi-Target_Domain_Adaptation.pdf (archive administrators only)
Type: Publisher's layout version
License: All rights reserved
Size: 1.69 MB
Format: Adobe PDF (View/Open)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11572/326192
Citations:
  • Scopus: 51
  • Web of Science: 42