ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., V-2-2022, 259–266, 2022
https://doi.org/10.5194/isprs-annals-V-2-2022-259-2022
17 May 2022

LEARNING FROM THE PAST: CROWD-DRIVEN ACTIVE TRANSFER LEARNING FOR SEMANTIC SEGMENTATION OF MULTI-TEMPORAL 3D POINT CLOUDS

M. Kölle, V. Walter, and U. Soergel
  • Institute for Photogrammetry, University of Stuttgart, Germany

Keywords: Active Learning, Transfer Learning, Domain Adaptation, Crowdsourcing, Multi-Temporality, 3D Point Clouds, Semantic Segmentation

Abstract. The main bottleneck of machine learning systems such as convolutional neural networks is the availability of labeled training data. Hence, considerable effort (and thus cost) goes into setting up proper training data sets. However, models trained on a specific data set often perform unsatisfactorily when used to derive predictions for another (yet related) data set. We aim to overcome this problem by employing active learning to iteratively adapt an existing classifier to another domain. Specifically, we are concerned with the semantic segmentation of 3D point clouds of multiple epochs. We first establish a Random Forest classifier for the first epoch of our data set and then adapt it to successfully predict on two further, temporally disjoint point clouds of the same but extended area. The point clouds, which are part of the newly introduced Hessigheim 3D benchmark data set, differ in acquisition date and sensor configuration. We demonstrate that our workflow for domain adaptation is designed such that it i) offers the possibility to greatly reduce labeling effort compared to a passive learning baseline or to an active learning baseline trained from scratch, if the domain gap is small enough, and ii) at least does not incur more expense (compared to a newly initialized active learning loop) if the domain gap is severe. The latter is especially beneficial in scenarios where the similarity of two different domains is hard to assess.
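The core idea of the abstract — start from a Random Forest trained on a source epoch, then iteratively query labels (here provided by the crowd) for the most uncertain points of a new epoch and retrain — can be illustrated with a minimal, hedged sketch. This is not the authors' implementation: the features, domain shift, oracle labels, query budget, and uncertainty measure (margin sampling) below are all illustrative assumptions using scikit-learn.

```python
# Hedged sketch of crowd-driven active transfer learning (NOT the paper's code):
# a Random Forest trained on a source epoch is adapted to a shifted target
# epoch by repeatedly labeling the least-confident target points.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-point features of two epochs (domains).
X_src = rng.normal(0.0, 1.0, (500, 8))
y_src = (X_src[:, 0] > 0.0).astype(int)
X_tgt = rng.normal(0.5, 1.0, (500, 8))        # shifted target domain
y_tgt = (X_tgt[:, 0] > 0.5).astype(int)       # oracle labels (the "crowd")

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_src, y_src)                         # classifier for the first epoch

X_lab, y_lab = X_src, y_src                   # growing labeled pool
pool = np.arange(len(X_tgt))                  # unlabeled target points
for _ in range(5):                            # active learning iterations
    proba = np.sort(clf.predict_proba(X_tgt[pool]), axis=1)
    margin = proba[:, -1] - proba[:, -2]      # small margin = uncertain
    query = pool[np.argsort(margin)[:20]]     # ask the crowd for 20 labels
    X_lab = np.vstack([X_lab, X_tgt[query]])
    y_lab = np.concatenate([y_lab, y_tgt[query]])
    pool = np.setdiff1d(pool, query)
    clf.fit(X_lab, y_lab)                     # retrain on the combined set

acc = clf.score(X_tgt[pool], y_tgt[pool])
print(f"target accuracy after adaptation: {acc:.2f}")
```

Because the classifier starts from the source epoch rather than from scratch, labeling effort is only spent where the model is uncertain in the new domain — the mechanism by which the paper reduces annotation cost when the domain gap is small.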