ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume VIII-4/W2-2021
https://doi.org/10.5194/isprs-annals-VIII-4-W2-2021-67-2021
07 Oct 2021

USING SIMULATION DATA FROM GAMING ENVIRONMENTS FOR TRAINING A DEEP LEARNING ALGORITHM ON 3D POINT CLOUDS

S. Spiegel and J. Chen

Keywords: deep learning, point clouds, computer vision, gaming engines

Abstract. Deep neural networks (DNNs) and convolutional neural networks (CNNs) have demonstrated greater robustness and accuracy in classifying two-dimensional images and three-dimensional point clouds compared to more traditional machine learning approaches. However, their main drawback is the need for large quantities of semantically labeled training data, which are often out of reach for those with resource constraints. In this study, we evaluated the use of simulated 3D point clouds for training a CNN to segment and classify 3D point clouds of real-world urban environments. The simulation involved collecting light detection and ranging (LiDAR) data using a simulated 16-channel laser scanner within the CARLA (Car Learning to Act) autonomous vehicle gaming environment. We used this labeled data to train KP-FCNN, the Kernel Point Convolution (KPConv) segmentation network for point clouds, which we tested on real-world LiDAR data from the NPM3D benchmark data set. Our results showed that high accuracy can be achieved using data collected in a simulator.
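To illustrate the kind of data the simulated acquisition produces, the sketch below is a toy model (not the paper's code) of a 16-channel spinning LiDAR like the one emulated in CARLA: rays are cast from the sensor origin across evenly spaced elevation and azimuth angles and intersected with a flat ground plane, yielding labeled 3D points. All parameters (sensor height, vertical field of view, maximum range) are illustrative assumptions; CARLA itself exposes a Python API with semantic LiDAR sensors for this purpose.

```python
# Hypothetical sketch: emulate the scan geometry of a 16-channel spinning
# LiDAR by intersecting rays with a flat ground plane (z = 0).
# Parameters below are illustrative, not taken from the paper.
import math

def simulate_scan(sensor_height=1.8, n_channels=16, n_azimuth=360,
                  vert_fov=(-15.0, 15.0), max_range=100.0):
    """Return a list of (x, y, z, label) tuples; label 0 = ground."""
    points = []
    for ch in range(n_channels):
        # Evenly spaced elevation angles across the vertical field of view.
        frac = ch / (n_channels - 1)
        elev = math.radians(vert_fov[0] + frac * (vert_fov[1] - vert_fov[0]))
        for step in range(n_azimuth):
            az = 2 * math.pi * step / n_azimuth
            # Unit ray direction in the sensor frame.
            dx = math.cos(elev) * math.cos(az)
            dy = math.cos(elev) * math.sin(az)
            dz = math.sin(elev)
            if dz >= 0:
                continue  # upward rays never hit the ground plane
            t = -sensor_height / dz  # ray length at which z reaches 0
            if t > max_range:
                continue  # beyond the scanner's range
            points.append((t * dx, t * dy, 0.0, 0))
    return points

pts = simulate_scan()
```

In a real pipeline the per-point semantic labels come from the simulator (CARLA tags each return with the class of the object it hit), which is what makes the training data "free" compared to hand-labeling real scans.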