In this paper, we present a domain adaptation network to deal with classification scenarios subject to the data shift problem (i.e., labeled and unlabeled images acquired with different sensors and over completely different geographical areas). We rely on the power of pretrained convolutional neural networks (CNNs) to generate an initial feature representation of the labeled and unlabeled images under analysis, referred to as the source and target domains, respectively. Then, we feed the resulting features to an extra network placed on top of the pretrained CNN for further learning. During the fine-tuning phase, we learn the weights of this network by jointly minimizing three terms: 1) the cross-entropy error on the labeled source data; 2) the maximum mean discrepancy between the source and target data distributions; and 3) a regularization term preserving the geometrical structure of the target data. Furthermore, to obtain robust hidden representations, we propose a mini-batch gradient-based optimization method with a dynamic sample size for the local alignment of the source and target distributions. To validate the method, in the experiments we use the University of California Merced data set and a new multisensor data set acquired over several regions of the Kingdom of Saudi Arabia. The experiments show that: 1) pretrained CNNs offer an interesting solution for image classification compared to state-of-the-art methods; 2) their performance can degrade when dealing with data sets subject to the data shift problem; and 3) the proposed approach represents a promising solution for effectively handling this issue.
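The joint objective described above pairs a supervised loss on the source labels with a distribution-alignment term. As a minimal sketch only (not the paper's full objective, which also includes a graph-based term for the target geometry, uses kernel embeddings, and is optimized over mini-batches with a dynamic sample size), the maximum mean discrepancy under a linear kernel reduces to the squared distance between the empirical feature means of the two domains; the function names below are illustrative:

```python
import numpy as np

def mmd_linear(source, target):
    """Squared maximum mean discrepancy with a linear kernel:
    the squared Euclidean distance between empirical feature means."""
    delta = source.mean(axis=0) - target.mean(axis=0)
    return float(delta @ delta)

def cross_entropy(probs, labels):
    """Mean cross-entropy of predicted class probabilities
    against integer class labels."""
    picked = probs[np.arange(len(labels)), labels]
    return float(-np.mean(np.log(picked + 1e-12)))

# Toy features: a shifted target domain yields a large discrepancy,
# while comparing a sample with itself yields exactly zero.
rng = np.random.default_rng(0)
src = rng.normal(size=(64, 8))
tgt = rng.normal(size=(64, 8)) + 2.0   # simulated domain shift
print(mmd_linear(src, tgt))            # large: distributions differ
print(mmd_linear(src, src))            # zero: identical samples
```

In a joint objective of the kind the abstract outlines, such an alignment term would be weighted and added to the source cross-entropy before backpropagation, pulling the two feature distributions together while the classifier is fit on the labeled source data.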
Domain Adaptation Network for Cross-Scene Classification / Othman, Esam; Bazi, Yakoub; Melgani, Farid; Alhichri, Haikel; Alajlan, Naif; Zuair, Mansour. - In: IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING. - ISSN 0196-2892. - 55:8(2017), pp. 4441-4456. [10.1109/TGRS.2017.2692281]
File | Access | Type | License | Size | Format
---|---|---|---|---|---
TGARS_2017_Domain Adaptation.pdf | Archive managers only | Publisher's layout (editorial version) | All rights reserved | 4.57 MB | Adobe PDF
Domain adaptation... Postprint.pdf | Open access | Refereed author's manuscript (post-print) | All rights reserved | 2.91 MB | Adobe PDF
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.