XChange: An Explainable Dynamic Convolutional Autoencoder for Unsupervised Change Detection / Bergamasco, Luca; Bovolo, Francesca. - In: IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING. - ISSN 0196-2892. - 63:(2025), pp. 1-13. [10.1109/tgrs.2025.3606297]
XChange: An Explainable Dynamic Convolutional Autoencoder for Unsupervised Change Detection
Bergamasco, Luca; Bovolo, Francesca
2025-01-01
Abstract
Explainable artificial intelligence (XAI) approaches have recently been studied to improve the interpretability of increasingly complex deep learning (DL) methods for remote sensing (RS) applications. Many XAI approaches are designed for supervised DL methods, but few interpret unsupervised models, which are challenging to explain due to the lack of semantic information. Change detection (CD) methods automatically identify changes between RS images acquired over a geographical area at different times. Most CD methods are unsupervised, since gathering labeled multitemporal data is challenging. Unsupervised CD is an almost unexplored task for XAI, and so is the use of XAI for computational-efficiency optimization. In this article, we propose an XAI approach for unsupervised CD tasks. The proposed method forces a convolutional autoencoder (CAE) to learn explainable hidden-layer features for CD using a greedy layerwise training that retains only the features providing change information. It also automatically adapts the model depth to the spatial resolution and spatial-context information of the input images. New convolutional layers are added in the greedy layerwise process as long as they provide information according to the Kullback–Leibler (KL) divergence measure; if a new layer learns insufficient information, the process stops. A multiscale CD method then retrieves the change maps. We test the interpretability of the proposed method on three CD datasets composed of bitemporal multispectral images acquired by Landsat-8 and Sentinel-2 for detecting burned and deforested areas.
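The abstract describes a greedy layerwise growth rule: a new convolutional layer is kept only if its features add information according to a KL divergence measure. The following is a minimal NumPy sketch of one plausible form of such a stopping criterion, comparing histogram-based feature distributions of successive layers; the function names, bin count, and threshold are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) between two discrete distributions given as histograms."""
    p = p / p.sum()
    q = q / q.sum()
    # Bins with p == 0 contribute 0; eps guards against log(0).
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def should_add_layer(prev_features, new_features, bins=32, threshold=0.05):
    """Keep growing the model only if the new layer's feature distribution
    diverges enough from the previous layer's (i.e., it adds information).
    `bins` and `threshold` are hypothetical hyperparameters."""
    lo = min(prev_features.min(), new_features.min())
    hi = max(prev_features.max(), new_features.max())
    p, _ = np.histogram(prev_features, bins=bins, range=(lo, hi))
    q, _ = np.histogram(new_features, bins=bins, range=(lo, hi))
    return kl_divergence(q.astype(float), p.astype(float)) > threshold
```

In a layerwise training loop, this test would run after each candidate layer is trained: if it returns False, the candidate is discarded and growth stops, fixing the model depth as a function of the input data.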



