ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume IV-2/W5
ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., IV-2/W5, 285–292, 2019
https://doi.org/10.5194/isprs-annals-IV-2-W5-285-2019
© Author(s) 2019. This work is distributed under
the Creative Commons Attribution 4.0 License.

29 May 2019

INDOOR 3D MODELING AND FLEXIBLE SPACE SUBDIVISION FROM POINT CLOUDS

S. Nikoohemat1, A. Diakité2, S. Zlatanova2, and G. Vosselman1
  • 1Dept. of Earth Observation Science, Faculty ITC, University of Twente, Enschede, The Netherlands
  • 2Dept. of Built Environment, University of New South Wales, Sydney, Australia

Keywords: Point cloud processing, LiDAR reconstruction, Indoor 3D modelling, Indoor space subdivision, Indoor navigation

Abstract. Indoor navigation can be a tedious process in a complex and unknown environment. It becomes even more critical when first responders must intervene in a large building after a disaster has occurred. In such cases, an accurate map of the building is one of the best possible aids. Unfortunately, such a map is not always available, or it is often outdated and imprecise, leading to error-prone decisions. Thanks to advances in laser scanning, accurate 3D maps can be built in a relatively short time using all sorts of laser scanners (stationary, mobile, drone-mounted), although the information they provide is generally an unstructured point cloud. While most existing approaches extensively process the point cloud to produce an accurate architectural model of the scanned building, similar to a Building Information Model (BIM), we adopt a space-focused approach. This paper presents a framework that starts from point clouds of complex indoor environments, applies advanced processing to identify the 3D structures critical to navigation and path planning, and produces fine-grained navigation networks that account for obstacles and for the spatial accessibility of the navigating agents. The method involves generating a volumetric wall vector model from the point cloud, identifying the obstacles, and extracting the navigable 3D spaces. Our work contributes a new approach for space subdivision that does not require laser scanner positions or viewpoints. Unlike 2D cell decomposition or binary space partitioning, this work introduces a space enclosure method to handle 3D space extraction and non-Manhattan-World architecture. The results show that more than 90% of the spaces are correctly extracted. The approach is tested on several real buildings and builds on the latest advances in indoor navigation.