A Novel Approach to Incomplete Multimodal Learning for Remote Sensing Data Fusion / Chen, Yuxing; Zhao, Maofan; Bruzzone, Lorenzo. - In: IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING. - ISSN 0196-2892. - 62:5404914(2024), pp. 1-14. [10.1109/TGRS.2024.3387837]
A Novel Approach to Incomplete Multimodal Learning for Remote Sensing Data Fusion
Yuxing Chen; Maofan Zhao; Lorenzo Bruzzone
2024-01-01
Abstract
The mechanism of connecting multimodal signals through the self-attention operation is a key factor in the success of multimodal Transformer networks in remote sensing data fusion tasks. However, traditional approaches assume access to all modalities during both training and inference, which can lead to severe degradation when dealing with modal-incomplete inputs in downstream applications. To address this limitation, we propose a novel approach to incomplete multimodal learning in the context of remote sensing data fusion and the multimodal Transformer. This approach can be used in both supervised and self-supervised pretraining paradigms. It leverages additional learned fusion tokens, in combination with modality attention and masked self-attention mechanisms, to collect multimodal signals in a multimodal Transformer. The proposed approach employs reconstruction and contrastive losses to facilitate fusion during pretraining, while allowing for random modality combinations as inputs during network training. Experimental results show that the proposed method delivers state-of-the-art performance on two multimodal datasets for tasks such as building instance/semantic segmentation and land-cover mapping when dealing with incomplete inputs during inference.
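The core idea in the abstract can be illustrated with a short sketch. The PyTorch code below is a minimal illustration, not the authors' implementation: all names (`FusionTransformerSketch`, `n_fusion`, the token shapes, and the SAR/optical inputs) are hypothetical assumptions. It shows how learned fusion tokens can attend over whichever modality tokens are present, with a key-padding mask hiding missing modalities from the self-attention and with modalities dropped at random during training.

```python
# Minimal sketch (hypothetical, not the paper's implementation) of fusion
# tokens + masked self-attention over a variable set of modality tokens.
import torch
import torch.nn as nn


class FusionTransformerSketch(nn.Module):
    def __init__(self, dim=256, n_heads=8, n_fusion=4):
        super().__init__()
        # Learned fusion tokens that collect the multimodal signal.
        self.fusion_tokens = nn.Parameter(torch.randn(1, n_fusion, dim))
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, modality_tokens, present):
        # modality_tokens: list of [B, N_m, dim] tensors, one per modality
        # present: list of bools marking which modalities are available
        B = modality_tokens[0].shape[0]
        fusion = self.fusion_tokens.expand(B, -1, -1)
        tokens = torch.cat([fusion] + modality_tokens, dim=1)

        # Key padding mask: True = this position is ignored as a key,
        # so tokens of a missing modality contribute nothing to attention.
        masks = [torch.zeros(B, fusion.shape[1], dtype=torch.bool)]
        for m, x in enumerate(modality_tokens):
            masks.append(torch.full((B, x.shape[1]), not present[m],
                                    dtype=torch.bool))
        key_padding_mask = torch.cat(masks, dim=1).to(tokens.device)

        out, _ = self.attn(tokens, tokens, tokens,
                           key_padding_mask=key_padding_mask)
        # Keep only the fusion-token outputs as the fused representation.
        return out[:, : fusion.shape[1]]


# Training with random modality combinations, as the abstract describes:
# modalities are dropped at random so the network learns to handle
# modal-incomplete inputs (example shapes are hypothetical).
model = FusionTransformerSketch()
sar = torch.randn(2, 196, 256)      # e.g. SAR patch tokens
optical = torch.randn(2, 196, 256)  # e.g. optical patch tokens
present = [bool(torch.rand(1) > 0.5), True]  # keep at least one modality
fused = model([sar, optical], present)
print(fused.shape)  # torch.Size([2, 4, 256])
```

Returning only the fusion-token outputs yields a fixed-size fused representation regardless of which modalities were supplied, which is what permits arbitrary modality combinations at inference time.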
File | Access | Type | License | Size | Format
---|---|---|---|---|---
TGRS3387837.pdf | Restricted to repository managers | Refereed post-print (Refereed author's manuscript) | All rights reserved | 5.35 MB | Adobe PDF
A_Novel_Approach_to_Incomplete_Multimodal_Learning_for_Remote_Sensing_Data_Fusion.pdf | Open access | Publisher's version (Publisher's layout) | Creative Commons | 4.23 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.