ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume IV-2/W5
ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., IV-2/W5, 445–452, 2019
https://doi.org/10.5194/isprs-annals-IV-2-W5-445-2019

29 May 2019

POINTNET FOR THE AUTOMATIC CLASSIFICATION OF AERIAL POINT CLOUDS

M. Soilán1, R. Lindenbergh2, B. Riveiro1, and A. Sánchez-Rodríguez1
  • 1Dept. of Materials Engineering, Applied Mechanics and Construction, School of Industrial Engineering, University of Vigo, 36310, Vigo, Spain
  • 2Dept. of Geoscience & Remote Sensing, Faculty of Civil Engineering and Geosciences, Delft University of Technology, 2628 CN Delft, The Netherlands

Keywords: Aerial Laser Scanner, Point Cloud Classification, Deep Learning, Semantic Segmentation

Abstract. During the last couple of years, there has been increasing interest in developing new deep learning networks designed specifically for processing 3D point cloud data. In that context, this work intends to expand the applicability of one of these networks, PointNet, from the semantic segmentation of indoor scenes to outdoor point clouds acquired with Airborne Laser Scanning (ALS) systems. Our goal is to assist the classification of future iterations of a nationwide dataset such as the Actueel Hoogtebestand Nederland (AHN), using a classification model trained with a previous iteration. First, a simple application, ground classification, is proposed in order to demonstrate the capability of the proposed deep learning architecture to perform an efficient point-wise classification of aerial point clouds. Then, two different models based on PointNet are defined to classify the most relevant elements in the case study data: ground, vegetation, and buildings. The model for ground classification achieves an F-score above 96%, motivating the second part of the work; the remaining models reach an overall accuracy of around 87%, showing consistency across different versions of AHN but leaving room to improve their false positive and false negative rates. Therefore, this work concludes that the proposed classification of future AHN iterations is feasible but needs more experimentation.
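The point-wise classification described above rests on PointNet's core design choice: apply the same learned transformation to every point and aggregate with a symmetric function (max pooling), so the result does not depend on the order in which the scanner delivered the points. The following is a minimal numpy sketch of that idea, not the authors' implementation; the weights are random and the single dense layer is a hypothetical stand-in for the paper's shared MLPs.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_mlp(points, W, b):
    # Apply the SAME dense layer + ReLU to every point: (N, d_in) -> (N, d_out).
    return np.maximum(points @ W + b, 0.0)

def global_feature(points, W, b):
    # Max pooling over the point axis is a symmetric function, so the
    # resulting global descriptor is invariant to point ordering.
    return shared_mlp(points, W, b).max(axis=0)

# Toy "point cloud": 5 points with x, y, z coordinates (illustrative only).
cloud = rng.normal(size=(5, 3))
W = rng.normal(size=(3, 8))   # hypothetical layer weights
b = np.zeros(8)

feat = global_feature(cloud, W, b)
shuffled = cloud[rng.permutation(5)]
assert np.allclose(feat, global_feature(shuffled, W, b))  # order-invariant
```

In the full network, per-point features are concatenated with this global feature and passed through further shared layers to produce one class score per point, which is what enables the point-wise ground/vegetation/building labels discussed in the abstract.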