
Towards global scale segmentation with OpenStreetMap and remote sensing / Usmani, Munazza; Napolitano, Maurizio; Bovolo, Francesca. - In: ISPRS OPEN JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING. - ISSN 2667-3932. - 8:(2023), pp. 10003101-10003112. [10.1016/j.ophoto.2023.100031]

Towards global scale segmentation with OpenStreetMap and remote sensing

Usmani, Munazza; Napolitano, Maurizio; Bovolo, Francesca
2023-01-01

Abstract

Land Use Land Cover (LULC) segmentation is a well-known application of remote sensing in urban environments, where up-to-date and complete data are of major importance. Despite some success, pixel-based segmentation remains challenging because of class variability. With the growing popularity of crowdsourcing projects such as OpenStreetMap (OSM), the volume of user-generated content has also increased, offering a new prospect for LULC segmentation. We propose a deep-learning approach to segment objects in high-resolution imagery using semantic crowdsourced information. Given the complexity of satellite imagery and crowdsourced databases, deep learning frameworks play a significant role, and this integration reduces computation and labor costs. Our methods are based on a fully convolutional neural network (CNN) adapted for multi-source data processing. We discuss the use of data augmentation techniques and improvements to the training pipeline. We applied a semantic segmentation method (U-Net) and an instance segmentation method (Mask R-CNN); Mask R-CNN showed significantly higher segmentation accuracy from both qualitative and quantitative viewpoints. The two methods reach 91% and 96% overall accuracy in building segmentation, respectively, and 90% in road segmentation, demonstrating the complementarity of OSM and remote sensing and their potential for city sensing applications.
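As a rough illustration of the multi-source setup the abstract describes (not the authors' actual code), the minimal Python sketch below pairs a satellite image tile with an OSM-derived label mask, applies simple geometric data augmentation to both jointly, and computes the overall-accuracy metric reported above. All names here (the helpers, band layout, and tile sizes) are hypothetical assumptions for illustration only.

```python
import numpy as np

def random_flip_rotate(image, mask, rng):
    """Apply random flips and 90-degree rotations identically to a
    (channels, height, width) satellite tile and its (height, width)
    OSM-derived label mask, so imagery and labels stay co-registered."""
    if rng.random() < 0.5:                              # horizontal flip
        image, mask = image[:, :, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:                              # vertical flip
        image, mask = image[:, ::-1, :], mask[::-1, :]
    k = rng.integers(4)                                 # 0-3 quarter turns
    image = np.rot90(image, k, axes=(1, 2))
    mask = np.rot90(mask, k, axes=(0, 1))
    return np.ascontiguousarray(image), np.ascontiguousarray(mask)

def overall_accuracy(pred, truth):
    """Overall accuracy as used in the abstract: the fraction of pixels
    whose predicted class matches the reference class."""
    return float((pred == truth).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Hypothetical 4-band tile (e.g., RGB + NIR) and a binary building mask
    # rasterized from OSM footprints; sizes are placeholders.
    tile = rng.random((4, 256, 256), dtype=np.float32)
    osm_mask = (rng.random((256, 256)) < 0.1).astype(np.uint8)
    aug_tile, aug_mask = random_flip_rotate(tile, osm_mask, rng)
    # A trained U-Net or Mask R-CNN would produce `pred`; here the mask is
    # reused only to exercise the metric.
    print(overall_accuracy(aug_mask, aug_mask))  # -> 1.0
```

The key design point is that any geometric transform must be applied to the image and the crowdsourced mask together, otherwise the OSM labels drift out of alignment with the pixels they annotate.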
Files in this record:

File: Towards_global_scale_segmentation_with_OSM_and_RS.pdf
Access: Archive administrators only
Type: Refereed author's manuscript (post-print)
License: All rights reserved
Size: 2.86 MB
Format: Adobe PDF

File: 1-s2.0-S2667393223000029-main_compressed.pdf
Access: Open access
Type: Publisher's layout (editorial version)
License: Creative Commons
Size: 907.32 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/372807
Citations
  • Scopus 2