Volume IV-2/W4
ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., IV-2/W4, 295-302, 2017
https://doi.org/10.5194/isprs-annals-IV-2-W4-295-2017
© Author(s) 2017. This work is distributed under
the Creative Commons Attribution 4.0 License.

13 Sep 2017

A PRELIMINARY WORK ON LAYOUT SLAM FOR RECONSTRUCTION OF INDOOR CORRIDOR ENVIRONMENTS

A. Baligh Jahromi1, G. Sohn1, M. Shahbazi2, and J. Kang1
  • 1GeoICT Laboratory, Department of Earth, Space Science and Engineering, York University, 4700 Keele Street, Toronto, Ontario, Canada M3J 1P3
  • 2Department of Geomatics Engineering, University of Calgary, 2500 University Dr NW, Calgary, Alberta, Canada T2N 1N4

Keywords: Indoor Layout Reconstruction, Visual SLAM, Point Feature, Gaussian Sphere, Matching

Abstract. We propose a real-time indoor corridor layout estimation method based on visual Simultaneous Localization and Mapping (SLAM). The proposed method adopts the Manhattan World Assumption for indoor spaces and uses straight line segments detected in single images, together with their corresponding orthogonal vanishing points, to improve the feature matching scheme of the adopted visual SLAM system. Using the proposed real-time indoor corridor layout estimation method, the system is able to build an online sparse map of structural corner point features. The challenges presented by abrupt camera rotation in 3D space are handled by matching the vanishing directions of consecutive video frames on the Gaussian sphere. Initializing the system with single-image indoor layout features allows the proposed method to perform real-time layout estimation and camera localization in indoor corridor areas. For matching layout structural corner points, we adopted features that are invariant under scale, translation, and rotation. We propose a new feature matching cost function that considers both local and global context information. The cost function consists of a unary term, which measures pixel-to-pixel orientation differences of the matched corners, and a binary term, which measures the angle differences between directly connected layout corner features. We performed experiments on real scenes at York University campus buildings and on the publicly available RAWSEEDS dataset. The results show that the proposed method performs robustly while producing very limited position and orientation errors.
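The unary-plus-binary structure of the matching cost described above can be sketched in a few lines. The following is a minimal illustration only, not the paper's implementation: the function names, the dictionary representation of a corner, and the assumption that the connected-edge orientations of the two corners are supplied in corresponding order are all hypothetical.

```python
import math

def angle_diff(a, b):
    """Smallest absolute difference between two angles, in radians."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def matching_cost(c1, c2, edges1, edges2, w_unary=1.0, w_binary=1.0):
    """Hypothetical cost of matching corner c1 (frame t) to c2 (frame t+1).

    c1, c2   : dicts with an 'orientation' angle (radians) -- the unary cue.
    edges1/2 : orientations of edges to directly connected layout corners,
               assumed given in corresponding order -- the binary cue.
    """
    # Unary term: orientation difference of the matched corners themselves.
    unary = angle_diff(c1["orientation"], c2["orientation"])
    # Binary term: accumulated angle differences between the edges that
    # connect each corner to its directly connected layout corners.
    binary = sum(angle_diff(a, b) for a, b in zip(edges1, edges2))
    return w_unary * unary + w_binary * binary
```

Under this sketch, a corner matched to an identically oriented corner with identical connected-edge angles has zero cost, and the cost grows with both local (unary) and contextual (binary) disagreement.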