Hybrid FusionNet: A Hybrid Feature Fusion Framework for Multisource High-Resolution Remote Sensing Image Classification

Zheng, Y.; Liu, S.; Chen, H.; Bruzzone, L.
2024-01-01

Abstract

With the increasing number of high-resolution (HR) images captured by various platforms, integrating the spectral and spatial properties of data across different HR image types, such as multispectral (MS), hyperspectral (HS), and multitemporal (MT) images, remains a challenging task for object classification. This article proposes a novel hybrid framework, named hybrid FusionNet (HFN), that jointly exploits 2-D/3-D convolutional neural networks (CNNs) and a transformer encoder to address this complex classification problem. By incorporating 2-D and 3-D convolutional layers, the proposed HFN generates rich multidimensional hybrid features, including spectral, spatial, and temporal features. These features are then fed into a transformer encoder to learn global saliency and discriminative information, enabling the identification of spatially irregular and spectrally similar objects. The hybrid architecture efficiently captures local intricate spectral-spatial-temporal contextual features through the convolutional layers and then learns global long-range dependencies along the spectral dimension through the transformer encoder, thus effectively reducing the impact of spectral-spatial mutations, distortions, and variations of ground objects. Experimental results on a high-resolution multispectral (HR-MS) dataset, a high-resolution hyperspectral (HR-HS) dataset, and a high-resolution multitemporal (HR-MT) dataset covering complex urban scenarios confirm the effectiveness of the proposed approach compared with the main state-of-the-art methods. Notably, the proposed HFN achieves satisfactory classification performance even with limited training samples. The source code will be made available at https://github.com/MissYongjie/Hybrid-FusionNet.
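
To make the architecture described in the abstract more concrete, the following is a minimal, illustrative PyTorch sketch of a hybrid 2-D/3-D CNN followed by a transformer encoder for patch-wise classification. It is not the authors' HFN implementation (that will be released at the GitHub link above); the layer sizes, patch size, number of bands, and fusion strategy shown here are assumptions made for illustration only.

# Minimal, illustrative sketch of a hybrid 2-D/3-D CNN + transformer encoder
# classifier in PyTorch. This is NOT the authors' HFN implementation; all layer
# sizes, the patch size, and the fusion strategy are assumptions.
import torch
import torch.nn as nn


class HybridCNNTransformerSketch(nn.Module):
    def __init__(self, in_bands=30, n_classes=9, d_model=64):
        super().__init__()
        # 3-D convolution: treats the spectral dimension as depth, extracting
        # joint spectral-spatial features from an input patch (B, 1, bands, H, W).
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.BatchNorm3d(8), nn.ReLU(),
        )
        # 2-D convolution: refines spatial context after folding the
        # (channel x band) axes into 2-D feature maps.
        self.conv2d = nn.Sequential(
            nn.Conv2d(8 * in_bands, d_model, kernel_size=3, padding=1),
            nn.BatchNorm2d(d_model), nn.ReLU(),
        )
        # Transformer encoder: models long-range dependencies among the
        # spatial positions (tokens) of the patch.
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.cls = nn.Linear(d_model, n_classes)

    def forward(self, x):                          # x: (B, 1, bands, H, W)
        f = self.conv3d(x)                         # (B, 8, bands, H, W)
        b, c, d, h, w = f.shape
        f = self.conv2d(f.reshape(b, c * d, h, w))  # (B, d_model, H, W)
        tokens = f.flatten(2).transpose(1, 2)      # (B, H*W, d_model)
        tokens = self.encoder(tokens)              # global context over tokens
        return self.cls(tokens.mean(dim=1))        # pool tokens, classify


# Example: a batch of 4 hyperspectral patches, 30 bands, 9x9 pixels.
logits = HybridCNNTransformerSketch()(torch.randn(4, 1, 30, 9, 9))
print(logits.shape)  # torch.Size([4, 9])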
Hybrid FusionNet: A Hybrid Feature Fusion Framework for Multisource High-Resolution Remote Sensing Image Classification / Zheng, Y.; Liu, S.; Chen, H.; Bruzzone, L.. - In: IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING. - ISSN 0196-2892. - 62:(2024), pp. 1-14. [10.1109/TGRS.2024.3352812]
Files in this record:
There are no files associated with this record.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/444077

Citations
  • PMC: not available
  • Scopus: 7
  • Web of Science: 6
  • OpenAlex: not available