02 Jun 2016
ENHANCED RGB-D MAPPING METHOD FOR DETAILED 3D MODELING OF LARGE INDOOR ENVIRONMENTS
Shengjun Tang1,4,5, Qing Zhu1,2,3,4, Wu Chen5, Walid Darwish5, Bo Wu5, Han Hu5, and Min Chen3
- 1State Key Laboratory of Information Engineering in Surveying Mapping and Remote Sensing, Wuhan University, 129 Luoyu Road, Wuhan, Hubei, China
- 2State-Province Joint Engineering Laboratory of Spatial Information Technology for High Speed Railway Safety, Chengdu, Sichuan, China
- 3Faculty of Geosciences and Environmental Engineering of Southwest Jiaotong University, Chengdu, Sichuan, China
- 4Collaborative Innovation Center for Geospatial Technology, 129 Luoyu Road, Wuhan, Hubei, China
- 5Department of Land Surveying & Geo-Informatics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
Keywords: Indoor Modeling, RGB-D Camera, Depth, Image, Camera Pose, Registration
RGB-D sensors are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping of indoor environments. First, they offer only a limited measurement range (e.g., within 3 m) and a limited field of view. Second, the error of the depth measurement increases with increasing distance to the sensor. In this paper, we propose an enhanced RGB-D mapping method for detailed 3D modeling of large indoor environments by combining RGB image-based modeling and depth-based modeling. The scale ambiguity problem during pose estimation with RGB image sequences is resolved by integrating the depth and visual information provided by the proposed system. A robust rigid-transformation recovery method is developed to register the RGB image-based and depth-based 3D models together. The proposed method is examined with two datasets collected in indoor environments; the experimental results demonstrate the feasibility and robustness of the proposed method.
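The abstract refers to resolving the scale ambiguity of image-based reconstruction and to recovering a rigid transformation that registers the image-based and depth-based models. The paper's own recovery method is not reproduced here; the snippet below is only a minimal sketch of the general idea, estimating a similarity transformation (scale, rotation, translation) between corresponding 3D points with the standard Umeyama/Procrustes approach, assuming point correspondences are already available. The function name and the synthetic example data are illustrative, not from the paper.

```python
# Minimal sketch (not the authors' implementation): align an up-to-scale
# image-based point set to metric depth-based points via a similarity
# transformation estimated with the Umeyama/Procrustes method.
import numpy as np

def estimate_similarity_transform(src, dst):
    """Estimate s, R, t such that dst ~= s * R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3D points.
    """
    n = src.shape[0]
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean

    # Cross-covariance and SVD give the optimal rotation (Kabsch/Umeyama).
    H = src_c.T @ dst_c / n
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T

    # The scale factor resolves the monocular scale ambiguity using the
    # metric depth-based points.
    var_src = (src_c ** 2).sum() / n
    s = (S * np.diag(D)).sum() / var_src
    t = dst_mean - s * R @ src_mean
    return s, R, t

# Synthetic usage example: recover a known scale, rotation, and translation.
rng = np.random.default_rng(0)
pts_metric = rng.uniform(-1, 1, size=(100, 3))            # depth-based (metric) points
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
offset = np.array([0.2, -0.1, 0.3])
pts_image = (pts_metric - offset) @ R_true / 2.5           # scaled, rotated image-based points
s, R, t = estimate_similarity_transform(pts_image, pts_metric)
aligned = (s * (R @ pts_image.T)).T + t
print("RMS alignment error:", np.sqrt(((aligned - pts_metric) ** 2).mean()))
```

In practice the correspondences would come from matched features between the RGB image-based and depth-based models, and a robust wrapper (e.g., RANSAC over this estimator) would be used to reject outlier matches.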