ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume IV-2/W5
ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., IV-2/W5, 263–270, 2019
https://doi.org/10.5194/isprs-annals-IV-2-W5-263-2019

  29 May 2019

AN RGB-D DATA PROCESSING FRAMEWORK BASED ON ENVIRONMENT CONSTRAINTS FOR MAPPING INDOOR ENVIRONMENTS

W. Darwish1,2, W. Li2, S. Tang3, Y. Li2, and W. Chen2
  • 1Vrije Universiteit Brussels, Department of Electronics and Informatics, 1050 Brussels, Belgium
  • 2The Hong Kong Polytechnic University, Department of Land Surveying and Geo-Informatics, Hung Hom, Hong Kong SAR, China
  • 3Shenzhen University, Shenzhen Key Laboratory of Spatial Smart Sensing and Services & The Key Laboratory for Geo-Environment Monitoring of Coastal Zone of the National Administration of Surveying, Mapping and GeoInformation, Shenzhen 518060, China

Keywords: RGB-D Sensor, Indoor Reconstruction, 3D features, SLAM, Constraint Mapping

Abstract. The adoption of RGB and depth (RGB-D) sensors for surveying applications (e.g., building information modeling [BIM], indoor navigation, and three-dimensional [3D] modeling) as a replacement for expensive and time-consuming methods (e.g., stereo cameras, laser scanners) has recently attracted great attention. Owing to the distinctive structure and scale of indoor environments, the quality of the depth data produced by RGB-D cameras and the performance of the simultaneous localization and mapping (SLAM) system responsible for camera pose estimation remain substantial problems in existing RGB-D mapping systems. This study introduces a new RGB-D data processing framework that exploits two-dimensional (2D) and 3D features from RGB and depth images. To cope with the self-repetitive structure of indoor environments, the proposed framework uses novel description functions for both line and plane features extracted from RGB and depth images, which are then matched between successive RGB-D frames. The framework estimates the camera pose by minimizing the combined geometric distance of both 2D and 3D features. Leveraging the previously known structure of the indoor environment, the framework applies structural constraints to enhance the precision of the 3D model. When a loop closure is detected, the framework also adopts a graph-based optimization technique to distribute the closure error over the graph's nodes and edges. A visual RGB-D SLAM system and the default sensor tracking system (SensorFusion) were used to assess the performance of the proposed framework. The results show that the proposed framework achieves a significant improvement in 3D model accuracy.
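The graph-based loop-closure step mentioned in the abstract can be illustrated with a minimal sketch. The example below reduces the pose graph to a 1-D chain of poses (the paper's framework works on full 6-DoF camera poses); the function name and all numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def distribute_closure_error(odometry, closure):
    """Least-squares pose graph on a 1-D chain: poses x_1..x_n, x_0 fixed at 0.

    odometry[i] is the measured step x_{i+1} - x_i; `closure` is an
    independent measurement of x_n - x_0 from a detected loop closure.
    Solving the resulting over-determined linear system spreads the
    accumulated drift over all graph edges instead of leaving it
    concentrated at the final pose.
    """
    n = len(odometry)
    A = np.zeros((n + 1, n))
    b = np.zeros(n + 1)
    for i, step in enumerate(odometry):   # odometry edges x_{i+1} - x_i = step
        A[i, i] = 1.0
        if i > 0:
            A[i, i - 1] = -1.0
        b[i] = step
    A[n, n - 1] = 1.0                     # loop-closure edge x_n - x_0 = closure
    b[n] = closure
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Four unit odometry steps accumulate to 4.0, but the loop closure
# measures 3.6; the 0.4 residual is shared equally (0.08 per edge)
# by the five edges of the cycle.
poses = distribute_closure_error([1.0, 1.0, 1.0, 1.0], 3.6)
print(poses)  # → [0.92 1.84 2.76 3.68]
```

With equal edge weights, least squares distributes the cycle mismatch uniformly; in the paper's setting, information-weighted edges would apportion the error according to each measurement's uncertainty instead.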