
Semantic segmentation for UAV low-light scenes based on deep learning and thermal infrared image features / Nuradili, P.; Zhou, G.; Zhou, J.; Wang, Z.; Meng, Y.; Tang, W.; Melgani, F.. - In: INTERNATIONAL JOURNAL OF REMOTE SENSING. - ISSN 0143-1161. - 45:12(2024), pp. 4160-4177. [10.1080/01431161.2024.2357842]

Semantic segmentation for UAV low-light scenes based on deep learning and thermal infrared image features

Tang, W.; Melgani, F.
2024-01-01

Abstract

With advancements in unmanned aerial vehicle (UAV) remote sensing technology, remote sensing images have emerged as a critical source of research data across various domains, including agriculture, forestry, and environmental research. UAVs fitted with different spectral sensors can capture multiple image modalities, presenting both challenges and opportunities for image semantic segmentation technology. Most existing semantic segmentation networks excel at processing images captured by visible light cameras but often fail to segment images captured by UAVs under low-light conditions because of insufficient lighting, reduced visual clarity, high noise levels, and uneven illumination. Thermal infrared imaging sensors capture thermal radiation information, which has the potential to improve segmentation accuracy when integrated with visible images. In this study, we introduce a novel semantic segmentation processing framework that evaluates different fusion methods for combining visible and thermal infrared images. The framework employs a lightweight deep learning model and is designed for accurate semantic segmentation of the fused images. Experiments are conducted on images collected in our UAV flight experiments and on a public night-time dataset to assess the performance of the proposed approach. Experimental results show that the proposed framework achieves state-of-the-art performance on semantic segmentation tasks in low-light conditions.
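The abstract does not specify the fusion strategy or the segmentation architecture used in the paper. As a rough illustration of the general RGB-thermal fusion-then-segmentation pipeline it describes, the following is a minimal sketch assuming early fusion by channel concatenation and a small convolutional encoder-decoder; the class `TinyFusionSegNet`, all layer choices, and all tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class TinyFusionSegNet(nn.Module):
    """Illustrative lightweight encoder-decoder for RGB-T semantic segmentation.

    Assumes early fusion: the 3-channel visible image and the 1-channel
    thermal image are concatenated into a 4-channel input. This is a generic
    sketch, not the architecture proposed in the paper.
    """

    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, kernel_size=2, stride=2),
        )

    def forward(self, rgb: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
        # Early fusion: stack co-registered visible and thermal bands along the channel axis.
        fused = torch.cat([rgb, thermal], dim=1)  # (B, 4, H, W)
        return self.decoder(self.encoder(fused))  # per-pixel class logits


if __name__ == "__main__":
    # Dummy co-registered RGB and thermal tiles (batch of 2, 256 x 256 pixels).
    rgb = torch.rand(2, 3, 256, 256)
    thermal = torch.rand(2, 1, 256, 256)
    logits = TinyFusionSegNet(num_classes=6)(rgb, thermal)
    print(logits.shape)  # torch.Size([2, 6, 256, 256])
```

Other fusion strategies (e.g. late or feature-level fusion with separate visible and thermal branches) fit the same interface; the paper compares such alternatives, but their exact form is not given in the abstract.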
Year: 2024
Issue: 12
Authors: Nuradili, P.; Zhou, G.; Zhou, J.; Wang, Z.; Meng, Y.; Tang, W.; Melgani, F.

Use this identifier to cite or link to this record: https://hdl.handle.net/11572/437936

Citations
  • PubMed Central: n/a
  • Scopus: 1
  • Web of Science: 1
  • OpenAlex: n/a