ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., V-3-2020, 431–437, 2020

03 Aug 2020

H. Bernsteiner1, N. Brožová2, I. Eischeid3, A. Hamer4, S. Haselberger5, M. Huber6, A. Kollert7, T. M. Vandyk8, and F. Pirotti9
  • 1University of Bayreuth, Chair of Geomorphology, 95447 Bayreuth, Germany
  • 2WSL Institute for Snow and Avalanche Research SLF, Flüelastrasse 11, 7260 Davos Dorf, Switzerland
  • 3Department of Arctic and Marine Biology, UiT The Arctic University of Norway, 9037 Tromsø, Norway; Norwegian Polar Institute, Fram Center, Hjalmar Johansens gate 14, 9007 Tromsø, Norway; Aarhus University, Department of Bioscience, Denmark
  • 4University of Manchester, Geography, School of Environment Education & Development, Manchester M13 9PL, UK
  • 5University of Vienna, Institute of Geography and Regional Research, 1010 Vienna, Austria
  • 6Centre de Recherches Pétrographiques et Géochimiques, 15 rue Notre Dame des Pauvres, 54500 Vandoeuvre les Nancy, France
  • 7Institute for Interdisciplinary Mountain Research, Austrian Academy of Sciences, Technikerstr. 21a, 6020 Innsbruck, Austria
  • 8Department of Geography, Royal Holloway University of London, Egham, Surrey, TW200EX, UK
  • 9CIRGEO Interdepartmental Research Center in Geomatics, TeSAF Department, University of Padova, 35020 Legnaro (PD), Italy

Keywords: Terrestrial Photogrammetry, Structure from Motion, Surface Classification, Machine Learning, High Mountain Environment

Abstract. Increasingly advanced and affordable close-range sensing techniques are employed by an ever-broadening range of users, with varying competence and experience. In this context, a method was tested that uses photogrammetry and classification by machine learning to divide a point cloud into different surface type classes. The study site is a peat scarp 20 metres long in the actively eroding river bank of the Rotmoos valley near Obergurgl, Austria. Imagery from near-infrared (NIR) and conventional (RGB) sensors, georeferenced with coordinates of targets surveyed with a total station, was used to create a point cloud using structure from motion and dense image matching. NIR and RGB information were merged into a single point cloud, and 18 geometric features were extracted using three different radii (0.02 m, 0.05 m and 0.1 m), totalling 58 variables on which to apply the machine learning classification. Segments representing six classes (dry grass, green grass, peat, rock, snow and target) were extracted from the point cloud and split into a training set and a testing set. A Random Forest machine learning model was trained using machine learning packages in the R-CRAN environment. The overall classification accuracy and Kappa Index were 98% and 97% respectively. The rock, snow and target classes had the highest producer and user accuracies. Dry and green grass had the highest omission errors (1.9% and 5.6% respectively) and commission errors (3.3% and 3.4% respectively). Analysis of feature importance revealed that the spectral descriptors (NIR, R, G, B) were by far the most important determinants, followed by verticality at 0.1 m radius.
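The workflow the abstract describes — spectral (NIR, R, G, B) plus multi-radius geometric features per point, a train/test split, a Random Forest classifier, and evaluation via overall accuracy, Kappa and feature importance — was implemented by the authors with machine learning packages in R. A minimal sketch of the same pipeline, using scikit-learn in place of the R packages and synthetic stand-in data (all array shapes, parameter values and variable names here are illustrative assumptions, not the authors' code), might look like:

```python
# Hedged sketch of the abstract's classification pipeline (NOT the authors' R code).
# Synthetic data stands in for the real per-point feature table.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
classes = ["dry grass", "green grass", "peat", "rock", "snow", "target"]

# Stand-in for the 58 per-point variables: 4 spectral bands (NIR, R, G, B)
# plus 18 geometric features at each of three radii (0.02, 0.05, 0.1 m).
n_points, n_features = 3000, 4 + 18 * 3
X = rng.normal(size=(n_points, n_features))
y = rng.integers(0, len(classes), size=n_points)
# Make the spectral columns carry the class signal, loosely mirroring the
# paper's finding that the spectral descriptors dominate.
X[:, :4] += 3.0 * y[:, None]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)

# Overall accuracy and Cohen's Kappa, as reported in the abstract (98% / 97%
# there; values here depend only on the synthetic data).
print("overall accuracy:", accuracy_score(y_test, pred))
print("kappa:", cohen_kappa_score(y_test, pred))

# Feature-importance ranking, analogous to the paper's importance analysis.
top = np.argsort(model.feature_importances_)[::-1][:4]
print("most important feature indices:", top)
```

Per-class producer and user accuracies, as quoted for the grass, rock, snow and target classes, would correspond to the row- and column-normalised diagonals of the confusion matrix (`sklearn.metrics.confusion_matrix`) on the test set.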