Lightweight Attention Network for Very High-Resolution Image Semantic Segmentation

Bruzzone, Lorenzo
2023-01-01

Abstract

Semantic segmentation is one of the most challenging tasks for very high-resolution (VHR) remote sensing applications. Deep convolutional neural networks (DCNNs) based on the attention mechanism have shown outstanding performance in the semantic segmentation of VHR remote sensing images. However, existing attention-guided methods require the estimation of a large number of parameters, which is hampered by the limited number of available labeled samples and results in underperforming segmentation. In this article, we propose a multistage feature fusion lightweight (MSFFL) model to greatly reduce the number of parameters and improve the accuracy of semantic segmentation. In this model, two parallel enhanced attention modules, i.e., the spatial attention module (SAM) and the channel attention module (CAM), are designed by introducing encoded position information. Then, a covariance calculation strategy is adopted to recalibrate the generated attention maps. The integration of the enhanced attention modules into the proposed MSFFL model results in an efficient lightweight attention network (LiANet). The performance of LiANet is assessed on two benchmark datasets, and the experimental results demonstrate that it achieves promising performance with a small number of parameters.
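
As a reading aid for the abstract, below is a minimal, hedged sketch in PyTorch of what a block combining parallel spatial and channel attention, encoded position information, and a covariance-based recalibration of the attention maps could look like. It is reconstructed from the abstract alone: the module names (SpatialAttention, ChannelAttention, LiANetBlock), tensor shapes, coordinate encoding, and fusion step are assumptions for illustration, not the authors' LiANet/MSFFL implementation; refer to the published article (DOI 10.1109/TGRS.2023.3272614) for the actual method.

# Illustrative sketch only (not the authors' code). It shows parallel
# spatial/channel attention with coordinate (position) encoding and a
# covariance-style recalibration, loosely following the abstract.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Spatial attention that concatenates normalized (x, y) coordinate maps
    to the features before predicting a per-pixel attention mask."""

    def __init__(self, channels: int):
        super().__init__()
        # +2 input channels for the two coordinate maps.
        self.conv = nn.Conv2d(channels + 2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        ys = torch.linspace(-1.0, 1.0, h, device=x.device, dtype=x.dtype)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device, dtype=x.dtype)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        coords = torch.stack([gy, gx]).expand(b, -1, -1, -1)        # (B, 2, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([x, coords], 1)))  # (B, 1, H, W)
        return x * attn


class ChannelAttention(nn.Module):
    """Channel attention built from a channel-by-channel covariance matrix
    (one possible reading of the 'covariance calculation strategy')."""

    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.zeros(1))  # zero-initialized residual gate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        feat = x.flatten(2)                              # (B, C, H*W)
        feat = feat - feat.mean(dim=2, keepdim=True)     # center for covariance
        cov = feat @ feat.transpose(1, 2) / (h * w - 1)  # (B, C, C)
        attn = torch.softmax(cov, dim=-1)                # channel affinity map
        out = (attn @ feat).view(b, c, h, w)
        return x + self.scale * out


class LiANetBlock(nn.Module):
    """Hypothetical block: run SAM and CAM in parallel, then fuse with a 1x1 conv."""

    def __init__(self, channels: int):
        super().__init__()
        self.sam = SpatialAttention(channels)
        self.cam = ChannelAttention()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([self.sam(x), self.cam(x)], dim=1))


if __name__ == "__main__":
    block = LiANetBlock(channels=64)
    x = torch.randn(2, 64, 32, 32)
    print(block(x).shape)  # torch.Size([2, 64, 32, 32])

The zero-initialized gate on the covariance branch is a common way to let an attention term grow in during training; whether LiANet does the same is not stated in the abstract.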
Year: 2023
Authors: Guan, Renchu; Wang, Mingming; Bruzzone, Lorenzo; Zhao, Haishi; Yang, Chen
Lightweight Attention Network for Very High-Resolution Image Semantic Segmentation / Guan, Renchu; Wang, Mingming; Bruzzone, Lorenzo; Zhao, Haishi; Yang, Chen. - In: IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING. - ISSN 0196-2892. - 61:(2023), pp. 440351401-440351414. [10.1109/TGRS.2023.3272614]
Files in this item:

TGRS3272614.pdf
  Access: Open access
  Type: Refereed post-print (refereed author's manuscript)
  License: All rights reserved
  Size: 638.45 kB
  Format: Adobe PDF

Lightweight_Attention_Network_for_Very_High-Resolution_Image_Semantic_Segmentation.pdf
  Access: Restricted (archive administrators only)
  Type: Publisher's version (publisher's layout)
  License: All rights reserved
  Size: 1.77 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/401461
Citations
  • PMC: not available
  • Scopus: not available
  • Web of Science: 7