MP-ResNet: Multipath Residual Network for the Semantic Segmentation of High-Resolution PolSAR Images / Ding, Lei; Zheng, Kai; Lin, Dong; Chen, Yuxing; Liu, Bing; Li, Jiansheng; Bruzzone, Lorenzo. - In: IEEE GEOSCIENCE AND REMOTE SENSING LETTERS. - ISSN 1545-598X. - 2021/19:(2022), pp. 40142051-40142055. [10.1109/LGRS.2021.3079925]
MP-ResNet: Multipath Residual Network for the Semantic Segmentation of High-Resolution PolSAR Images
Ding, Lei;Chen, Yuxing;Bruzzone, Lorenzo
2022-01-01
Abstract
There are limited studies on the semantic segmentation of high-resolution polarimetric synthetic aperture radar (PolSAR) images due to the scarcity of training data and the complexity of managing speckle noise. The Gaofen contest has provided open access to a high-quality PolSAR semantic segmentation dataset. Taking this opportunity, we propose a multipath residual network (MP-ResNet) architecture for the semantic segmentation of high-resolution PolSAR images. Compared to conventional U-shape encoder-decoder convolutional neural network (CNN) architectures, the MP-ResNet learns semantic context with its parallel multiscale branches, which greatly enlarges its valid receptive fields and improves the embedding of local discriminative features. In addition, MP-ResNet adopts a multilevel feature fusion design in its decoder to effectively exploit the features learned from its different branches. Comparisons with the baseline method of a fully convolutional network (FCN with ResNet34) show that the MP-ResNet achieves significant accuracy improvements. It also surpasses several state-of-the-art methods in terms of overall accuracy (OA), mF₁, and frequency weighted intersection over union (fwIoU), with only a limited increase in computational cost. This CNN architecture can be used as a baseline method for future studies on the semantic segmentation of PolSAR images. The code is available at: https://github.com/ggsDing/SARSeg.
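The core idea described in the abstract — parallel branches that process the feature map at several scales and a decoder that fuses them — can be sketched as follows. This is a minimal, purely illustrative NumPy sketch of the multiscale-branch-and-fusion pattern, not the authors' implementation (which is PyTorch-based and available at the linked repository); the function names `branch` and `mp_fuse` and the choice of average pooling, a ReLU placeholder, and nearest-neighbour upsampling are all assumptions made for illustration.

```python
import numpy as np

def avg_pool(x, k):
    # Downsample an (H, W, C) feature map by factor k via average pooling.
    h, w, c = x.shape
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k, c).mean(axis=(1, 3))

def upsample(x, k):
    # Nearest-neighbour upsampling by factor k along both spatial axes.
    return x.repeat(k, axis=0).repeat(k, axis=1)

def branch(x, scale):
    # Hypothetical stand-in for one residual branch: pool to a coarser scale,
    # apply a placeholder nonlinearity, and upsample back to full resolution.
    y = avg_pool(x, scale) if scale > 1 else x
    y = np.maximum(y, 0.0)  # placeholder for the branch's learned layers
    return upsample(y, scale) if scale > 1 else y

def mp_fuse(x, scales=(1, 2, 4)):
    # Run parallel branches at several scales and fuse them by channel
    # concatenation, mimicking a multilevel feature fusion decoder.
    feats = [branch(x, s) for s in scales]
    return np.concatenate(feats, axis=-1)

x = np.random.rand(8, 8, 3)
out = mp_fuse(x)
print(out.shape)  # (8, 8, 9): three branches of 3 channels each, fused
```

The coarser branches see a larger effective receptive field per "layer" (each pooled cell aggregates a k×k neighbourhood), which is the intuition behind the enlarged valid receptive fields mentioned in the abstract.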