
Tightening Classification Boundaries in Open Set Domain Adaptation through Unknown Exploitation / e Silva, Lucas Fernando Alvarenga; Sebe, Nicu; Almeida, Jurandy. - (2023), pp. 157-162. (Paper presented at the SIBGRAPI conference held in Rio Grande, Brazil, 06-09 November 2023) [10.1109/SIBGRAPI59091.2023.10347139].

Tightening Classification Boundaries in Open Set Domain Adaptation through Unknown Exploitation

Sebe, Nicu;
2023-01-01

Abstract

Convolutional Neural Networks (CNNs) have brought revolutionary advances to many research areas due to their capacity to learn from raw data. However, when these methods are applied to non-controllable environments, many factors can degrade the model's expected performance, such as unlabeled datasets with different levels of domain shift and category shift. When both issues occur at the same time, we tackle this challenging setup as the Open Set Domain Adaptation (OSDA) problem. In general, existing OSDA approaches focus their efforts only on aligning known classes or, if they already extract possible negative instances, use them as a new category learned with supervision during the course of training. We propose a novel way to improve OSDA approaches by extracting a high-confidence set of unknown instances and using it as a hard constraint to tighten the classification boundaries of OSDA methods. Specifically, we adopt a new loss constraint evaluated in three different ways: (1) directly with the pristine negative instances; (2) with randomly transformed negatives obtained through data augmentation techniques; and (3) with synthetically generated negatives containing adversarial features. We assessed all approaches in an extensive set of experiments based on OVANet, observing consistent improvements on two public benchmarks, the Office-31 and Office-Home datasets, with absolute gains of up to 1.3% for both Accuracy and H-Score on Office-31 and 5.8% for Accuracy and 4.7% for H-Score on Office-Home.
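The unknown-exploitation idea described above — extracting a high-confidence set of unknown instances and penalizing any residual known-class confidence they receive — could be sketched roughly as follows. This is a minimal illustration with assumed names, an assumed thresholding rule, and an assumed loss form; it is not the paper's actual OVANet-based formulation.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the known-class logits."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def select_high_confidence_unknowns(logits, tau=0.5):
    """Hypothetical selection rule: flag instances whose maximum
    known-class probability falls below a threshold tau as likely
    unknowns. (The paper's actual extraction criterion differs.)"""
    probs = softmax(logits)
    return probs.max(axis=1) < tau

def unknown_constraint_loss(logits, mask):
    """Assumed loss form: the mean maximum known-class probability over
    the selected unknowns. Minimizing it pushes those instances away
    from every known class, tightening the classification boundaries."""
    probs = softmax(logits[mask])
    if probs.size == 0:
        return 0.0
    return float(probs.max(axis=1).mean())

# Toy batch: 2 target instances, 3 known classes.
logits = np.array([[5.0, 0.0, 0.0],    # confidently known
                   [0.1, 0.0, 0.05]])  # near-uniform: likely unknown
mask = select_high_confidence_unknowns(logits)
loss = unknown_constraint_loss(logits, mask)
```

In the paper's setting, such a term would be computed on OVANet's outputs and evaluated in the three variants listed above: on the pristine negatives, on their augmented transformations, and on adversarially generated negatives.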
2023
36th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI)
Piscataway, NJ USA
IEEE
979-8-3503-3872-0
e Silva, Lucas Fernando Alvarenga; Sebe, Nicu; Almeida, Jurandy
Files in this record:
File | Size | Format
Tightening_Classification_Boundaries_in_Open_Set_Domain_Adaptation_through_Unknown_Exploitation.pdf

Archive administrators only

Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 2.67 MB
Format: Adobe PDF
2309.08964.pdf

Open access

Type: Refereed author's manuscript (post-print)
License: All rights reserved
Size: 2.72 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/400998