Cugu, I.; Mancini, M.; Chen, Y.; Akata, Z.: Attention Consistency on Visual Corruptions for Single-Source Domain Generalization. In: Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022), held in the USA in 2022, pp. 4164-4173. DOI: 10.1109/CVPRW56347.2022.00461.

Attention Consistency on Visual Corruptions for Single-Source Domain Generalization

Mancini, M.
2022-01-01

Abstract

Generalizing visual recognition models trained on a single distribution to unseen input distributions (i.e., domains) requires making them robust to superfluous correlations in the training set. In this work, we achieve this goal by altering the training images to simulate new domains and by imposing consistent visual attention across the different views of the same sample. We discover that the first objective can be simply and effectively met through visual corruptions. Specifically, we alter the content of the training images using the nineteen corruptions of the ImageNet-C benchmark and three additional transformations based on the Fourier transform. Since these corruptions preserve object locations, we propose an attention consistency loss to ensure that class activation maps across original and corrupted versions of the same training sample are aligned. We name our model Attention Consistency on Visual Corruptions (ACVC). We show that ACVC consistently achieves the state of the art on three single-source domain generalization benchmarks: PACS, COCO, and the large-scale DomainNet.
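
The two ingredients the abstract describes, location-preserving corruptions (including Fourier-based ones) and a consistency loss between class activation maps (CAMs) of clean and corrupted views, can be sketched in PyTorch as below. This is a minimal illustrative sketch, not the authors' released implementation: the amplitude-damping corruption, the CAM formulation, and the KL-based distance are all assumptions, and the exact loss used in the paper may differ.

    import torch
    import torch.nn.functional as F

    def fourier_corruption(img, alpha=0.5):
        # Assumed Fourier-based corruption: randomly damp the amplitude
        # spectrum while keeping the phase, so object locations survive.
        spec = torch.fft.fft2(img)                        # per-channel 2D FFT
        amp, phase = spec.abs(), spec.angle()
        amp = amp * (1.0 - alpha * torch.rand_like(amp))  # random damping
        out = torch.fft.ifft2(amp * torch.exp(1j * phase)).real
        return out.clamp(0.0, 1.0)

    def class_activation_map(feats, fc_weight, labels):
        # CAM of the ground-truth class (Zhou et al., 2016): weight the last
        # convolutional feature maps by the classifier row of that class.
        # feats: (B, C, H, W); fc_weight: (num_classes, C); labels: (B,)
        w = fc_weight[labels]                             # (B, C)
        cam = torch.einsum('bchw,bc->bhw', feats, w)      # (B, H, W)
        return F.relu(cam)

    def attention_consistency_loss(cam_clean, cam_corrupt, eps=1e-8):
        # Align the two attention maps via KL divergence between their
        # normalized spatial distributions (an assumption; the paper's
        # exact distance may differ).
        p = cam_clean.flatten(1) + eps
        q = cam_corrupt.flatten(1) + eps
        p = p / p.sum(dim=1, keepdim=True)
        q = q / q.sum(dim=1, keepdim=True)
        return F.kl_div(q.log(), p, reduction='batchmean')

A training step would then forward both the clean image and a corrupted copy, apply cross-entropy to both predictions, and add the consistency term weighted by a hyperparameter.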
Year: 2022
Series: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Publisher: IEEE Computer Society, 345 E 47TH ST, NEW YORK, NY 10017 USA
ISBN: 978-1-6654-8739-9
Authors: Cugu, I.; Mancini, M.; Chen, Y.; Akata, Z.
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/437735

Warning! The displayed data have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 32
  • Web of Science (ISI): 16
  • OpenAlex: ND