ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume V-3-2021
ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., V-3-2021, 181–189, 2021
https://doi.org/10.5194/isprs-annals-V-3-2021-181-2021

17 Jun 2021

INVESTIGATIONS ON FEATURE SIMILARITY AND THE IMPACT OF TRAINING DATA FOR LAND COVER CLASSIFICATION

M. Voelsen1, D. Lobo Torres2, R. Q. Feitosa2, F. Rottensteiner1, and C. Heipke1
  • 1Institute of Photogrammetry and GeoInformation, Leibniz Universität Hannover, Germany
  • 2Department of Electrical Engineering, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil

Keywords: land cover classification, remote sensing, FCN, cosine similarity loss

Abstract. Fully convolutional neural networks (FCN) are successfully used for pixel-wise land cover classification, i.e. the task of identifying the physical material of the Earth's surface for every pixel in an image. The acquisition of large training datasets is challenging, especially in remote sensing, but necessary for an FCN to perform well. One way to circumvent manual labelling is to use existing databases, which usually introduce a certain amount of label noise when combined with another data source. In the first part of this work, we investigate the impact of training data on an FCN. We experiment with different amounts of training data, varying w.r.t. the covered area, the available acquisition dates and the amount of label noise. We conclude that the more data is used for training, the better the generalization performance of the model becomes, and that the FCN is able to mitigate the effect of label noise to a high degree. Another challenge is the imbalanced class distribution in most real-world datasets, which can cause the classifier to focus on the majority classes, leading to poor classification performance for minority classes. To tackle this problem, in this paper, we use the cosine similarity loss to force feature vectors of the same class to be close to each other in feature space. Our experiments show that the cosine loss helps to obtain more similar feature vectors, but the similarity of the cluster centers also increases.
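The idea behind the cosine similarity loss described in the abstract can be sketched as follows: each pixel's feature vector is compared to a representative centre vector of its class, and the loss penalises angular (cosine) distance rather than Euclidean distance. The snippet below is a minimal illustrative sketch, not the authors' implementation; the function name, the use of NumPy, and the assumption of precomputed per-class centre vectors are all hypothetical choices for clarity.

```python
import numpy as np

def cosine_similarity_loss(features, labels, centers):
    """Illustrative sketch (not the paper's code): mean (1 - cos similarity)
    between each feature vector and the centre vector of its class.

    features: (N, D) array of per-pixel feature vectors
    labels:   (N,)   integer class labels in [0, C)
    centers:  (C, D) array of per-class centre vectors (assumed given)
    """
    # normalise features and centres to unit length
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    c = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    # cosine similarity of each feature with its own class centre
    sim = np.sum(f * c[labels], axis=1)
    # loss is 0 when features are perfectly aligned with their centre
    return float(np.mean(1.0 - sim))
```

Because the loss only constrains angles, feature vectors of one class are pulled onto a common direction; as the abstract notes, this does not by itself prevent the centre directions of different classes from drifting closer together.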