ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume V-4-2020
ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., V-4-2020, 65–70, 2020
https://doi.org/10.5194/isprs-annals-V-4-2020-65-2020
© Author(s) 2020. This work is distributed under
the Creative Commons Attribution 4.0 License.

  03 Aug 2020

TRANSFER LEARNING FOR INDOOR OBJECT CLASSIFICATION: FROM IMAGES TO POINT CLOUDS

J. Balado1,2, L. Díaz-Vilariño1,2, E. Verbree2, and P. Arias1
  • 1Universidade de Vigo, CINTECX, Applied Geotechnologies Research Group, Campus universitario de Vigo, As Lagoas, Marcosende 36310 Vigo, Spain
  • 2Delft University of Technology, Faculty of Architecture and the Built Environment, GIS Technology Section, 2628 BL Delft, The Netherlands

Keywords: Deep Learning, Data Augmentation, Convolutional Neural Networks, Indoor Environments, InceptionV3, Multi-view

Abstract. Indoor furniture is of great relevance to building occupants in everyday life. Furniture occupies space in the building, provides comfort, establishes order in rooms, and locates services and activities. Furniture is not always static; rooms can be reorganized according to changing needs. Keeping building models up to date with the current furniture is key to working with indoor environments. Laser scanning technology can acquire indoor environments in a fast and precise way, and recent artificial intelligence techniques can correctly classify the objects they contain. The objective of this work is to study how to minimize the use of point cloud samples, which are tedious to label, in neural network training, and to replace them with images obtained from online sources. For this, point clouds are converted to images by means of rotations and projections. The conversion of 3D vector data to a 2D raster allows the use of Convolutional Neural Networks, the generation of several images for each acquired point cloud object, and the combination with images obtained from online sources, such as Google Images. The images have been distributed among the training, validation, and testing sets following different percentages. The results show that, although point cloud images cannot be completely dispensed with in the training set, only 10% of them is enough to achieve high accuracy in the classification.
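The point-cloud-to-image conversion described in the abstract can be illustrated with a minimal sketch: rotate the cloud about the vertical axis and orthographically project it onto a plane to produce a raster view, repeating with several angles to obtain multiple images per object. This is not the paper's implementation; the rotation axis, angles, resolution (`res`), and the binary occupancy rasterisation are illustrative assumptions.

```python
import numpy as np

def point_cloud_to_image(points, yaw_deg=0.0, res=64):
    """Rotate an (N, 3) point cloud about the vertical (Z) axis and
    project it onto the XZ plane as a binary occupancy image.
    `yaw_deg` and `res` are illustrative choices, not from the paper."""
    yaw = np.radians(yaw_deg)
    # Rotation matrix about the Z (up) axis
    R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                  [np.sin(yaw),  np.cos(yaw), 0.0],
                  [0.0,          0.0,         1.0]])
    rotated = points @ R.T
    # Orthographic projection: keep X (horizontal) and Z (vertical)
    xz = rotated[:, [0, 2]]
    # Normalise to [0, 1] and rasterise into a res x res grid
    mins, maxs = xz.min(axis=0), xz.max(axis=0)
    norm = (xz - mins) / np.maximum(maxs - mins, 1e-9)
    idx = np.clip((norm * (res - 1)).astype(int), 0, res - 1)
    img = np.zeros((res, res), dtype=np.uint8)
    img[res - 1 - idx[:, 1], idx[:, 0]] = 255  # row 0 is the top of the image
    return img

# Several views of one object, obtained by varying the rotation angle
cloud = np.random.rand(1000, 3)
views = [point_cloud_to_image(cloud, yaw_deg=a) for a in range(0, 360, 45)]
```

Each resulting 2D raster can then be fed to an image CNN such as InceptionV3 and mixed freely with photographs gathered online, which is what enables the transfer-learning setup studied in the paper.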