
Variational Structured Attention Networks for Deep Visual Representation Learning / Yang, G.; Rota, P.; Alameda-Pineda, X.; Xu, D.; Ding, M.; Ricci, E.. - In: IEEE TRANSACTIONS ON IMAGE PROCESSING. - ISSN 1941-0042. - 2022:(2022), pp. 1-1. [10.1109/TIP.2021.3137647]

Variational Structured Attention Networks for Deep Visual Representation Learning

Rota, P.; Xu, D.; Ricci, E.
2022-01-01

Abstract

Convolutional neural networks have enabled major progress in pixel-level prediction tasks such as semantic segmentation, depth estimation, and surface normal prediction, thanks to their powerful capabilities in visual representation learning. Typically, state-of-the-art models integrate attention mechanisms to improve deep feature representations. Recently, several works have demonstrated the significance of learning and combining both spatial- and channel-wise attention for deep feature refinement. In this paper, we aim to improve on previous approaches and propose a unified deep framework that jointly learns spatial attention maps and channel attention vectors in a principled manner, so as to structure the resulting attention tensors and model the interactions between these two types of attention. Specifically, we integrate the estimation and the interaction of the attentions within a probabilistic representation learning framework, leading to VarIational STructured Attention networks (VISTA-Net). We implement the inference rules within the neural network, thus allowing for end-to-end learning of both the probabilistic parameters and the CNN front-end. As demonstrated by our extensive empirical evaluation on six large-scale datasets for dense visual prediction, VISTA-Net outperforms the state of the art in multiple continuous and discrete prediction tasks, confirming the benefit of the proposed joint structured spatial-channel attention estimation for deep representation learning. The code is available at https://github.com/ygjwd12345/VISTA-Net.
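To make the core idea of the abstract concrete, below is a minimal PyTorch-style sketch of spatial-channel attention factorization: a spatial attention map and a channel attention vector are estimated from a feature map and combined (here via a simple broadcasted product) into a structured attention tensor that gates the features. This is an illustrative assumption, not the authors' variational method; all module and variable names are hypothetical, and the actual VISTA-Net implementation is in the linked repository.

```python
# Minimal sketch of joint spatial-channel attention (illustrative only).
import torch
import torch.nn as nn


class SpatialChannelAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Spatial branch: collapse channels into a single-channel map.
        self.spatial = nn.Conv2d(channels, 1, kernel_size=1)
        # Channel branch: global pooling followed by a bottleneck MLP.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # s: (B, 1, H, W) spatial attention map, values in (0, 1).
        s = torch.sigmoid(self.spatial(x))
        # c: (B, C, 1, 1) channel attention vector, values in (0, 1).
        c = torch.sigmoid(self.channel(x))
        # Broadcasting s * c yields a (B, C, H, W) rank-one structured
        # attention tensor that jointly gates space and channels.
        return x * (s * c)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)     # a batch of CNN feature maps
    refined = SpatialChannelAttention(64)(feats)
    print(refined.shape)                   # torch.Size([2, 64, 32, 32])
```

VISTA-Net goes further than this sketch: it estimates the two attentions and their interaction variationally, within a probabilistic framework, rather than through the deterministic rank-one product shown above.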
Files in this record:

2103.03510.pdf
  Access: Archive administrators only
  Type: Non-refereed preprint
  License: All rights reserved
  Size: 16.71 MB
  Format: Adobe PDF

13_PDFsam_Variational_Structured_Attention_Networks_for_Deep_Visual_Representation_Learning.pdf
  Access: Archive administrators only
  Description: pp. 13-16
  Type: Publisher's layout
  License: All rights reserved
  Size: 3.85 MB
  Format: Adobe PDF

7_PDFsam_Variational_Structured_Attention_Networks_for_Deep_Visual_Representation_Learning.pdf
  Access: Archive administrators only
  Description: pp. 7-12
  Type: Publisher's layout
  License: All rights reserved
  Size: 10 MB
  Format: Adobe PDF

1_PDFsam_Variational_Structured_Attention_Networks_for_Deep_Visual_Representation_Learning.pdf
  Access: Archive administrators only
  Description: pp. 1-6
  Type: Publisher's layout
  License: All rights reserved
  Size: 606.78 kB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/357773
Citations
  • PMC: 0
  • Scopus: 3
  • Web of Science: ND