Volume IV-2/W7
ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., IV-2/W7, 197–204, 2019
https://doi.org/10.5194/isprs-annals-IV-2-W7-197-2019
© Author(s) 2019. This work is distributed under
the Creative Commons Attribution 4.0 License.

16 Sep 2019

ADVERSARIAL DOMAIN ADAPTATION FOR THE CLASSIFICATION OF AERIAL IMAGES AND HEIGHT DATA USING CONVOLUTIONAL NEURAL NETWORKS

D. Wittich and F. Rottensteiner
Institute of Photogrammetry and GeoInformation, Leibniz Universität Hannover, Germany

Keywords: Domain Adaptation, Segmentation, Classification, Fully Convolutional Networks

Abstract. Domain adaptation (DA) can drastically decrease the amount of training data needed to obtain good classification models by leveraging available data from a source domain for the classification of a new (target) domain. In this paper, we address deep DA, i.e. DA with deep convolutional neural networks (CNN), a problem that has not been addressed frequently in remote sensing. We present a new method for semi-supervised DA for the task of pixel-based classification by a CNN. After proposing an encoder-decoder-based fully convolutional neural network (FCN), we adapt a method for adversarial discriminative DA to be applicable to the pixel-based classification of remotely sensed data based on this network. The method tries to learn a feature representation that is domain-invariant; domain invariance is measured by a classifier's inability to predict from which domain a sample was generated. We evaluate our FCN on the ISPRS labelling challenge, showing that it is close to the best-performing models. DA is evaluated on the basis of three domains. We compare different network configurations and perform the representation transfer at different layers of the network. We show that when a proper layer is used for adaptation, our method achieves a positive transfer and thus an improved classification accuracy in the target domain for all evaluated combinations of source and target domains.
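The adversarial principle described in the abstract can be illustrated with a toy sketch: a domain discriminator is trained to tell source features from target features, while the feature representation receives the reversed gradient of the same loss, driving the discriminator towards chance level. The sketch below is a hypothetical, heavily simplified NumPy illustration of that idea, not the authors' FCN-based method: "features" are 2-D Gaussian samples, the adaptable part of the feature extractor is a single offset vector `shift`, and the discriminator is a least-squares linear classifier rather than a CNN.

```python
# Hypothetical minimal sketch of adversarial domain adaptation (NumPy only).
# A linear "domain discriminator" learns to separate source from target
# features; the target features are updated with the *reversed* gradient
# (gradient-reversal idea), pushing the two feature distributions together.
import numpy as np

rng = np.random.default_rng(0)
n = 200
src = rng.normal(0.0, 1.0, size=(n, 2))   # source-domain "features"
tgt = rng.normal(2.0, 1.0, size=(n, 2))   # target domain, offset by (2, 2)

shift = np.zeros(2)          # stand-in for the adaptable feature extractor
w, b = np.zeros(2), 0.0      # least-squares domain discriminator
y = np.r_[-np.ones(n), np.ones(n)]        # -1 = source, +1 = target

gap_before = np.linalg.norm(tgt.mean(0) - src.mean(0))

for _ in range(2000):
    feats = np.vstack([src, tgt + shift])
    d = feats @ w + b - y                 # residual of the domain classifier
    # Discriminator step: minimise the squared domain-prediction error.
    w -= 0.1 * 2.0 * feats.T @ d / len(y)
    b -= 0.1 * 2.0 * d.mean()
    # Feature step: gradient *ascent* on the same loss w.r.t. the target
    # features, so the extractor makes the discriminator's job harder.
    shift += 0.02 * 2.0 * (d[n:, None] * w).mean(0)

gap_after = np.linalg.norm((tgt + shift).mean(0) - src.mean(0))
print(f"feature-mean gap: {gap_before:.2f} -> {gap_after:.2f}")
```

At convergence the discriminator can no longer separate the domains, which is exactly the domain-invariance criterion the abstract refers to; in the paper this game is played between a CNN feature extractor and a learned discriminator instead of the linear toy model used here.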