ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., IV-4, 155-162, 2018
https://doi.org/10.5194/isprs-annals-IV-4-155-2018
© Author(s) 2018. This work is distributed under
the Creative Commons Attribution 4.0 License.

19 Sep 2018

A DEEP LEARNING STUDY OF EXTRACTING NAVIGATION AREA FROM CAD BLUEPRINTS

L. Niu1, Y. Q. Song2, J. Su1, and H. M. Zhang1
  • 1School of Surveying and Urban Spatial Information, Henan University of Urban Construction, 467036 Pingdingshan, China
  • 2School of Geographic and Environmental Science, Normal University of Tianjin, West Bin Shui Avenue, 300387 Tianjin, China

Keywords: Deep learning, Extraction, Navigation area, CAD blueprints

Abstract. Deep learning is a cutting-edge topic in AI and is drawing increasing attention from the photogrammetry and remote sensing community. In this study, we combine deep learning with CAD designs to extract navigation areas (rooms). To this end, we annotate more than 200 2D building blueprints in CAD form to construct a training set and use it to train an object detection model based on TensorFlow; the model is Faster R-CNN with the Inception v2 backbone, pre-trained on the COCO dataset. The tests and results comprise three parts: the first demonstrates the model's performance on the training dataset; the second applies the trained model to extract rooms from raw, previously unseen CAD blueprints; the third compares the deep-learning extraction results against those of a geometry-based algorithm. The tests show that the deep learning approach achieves higher accuracy than the geometric approach for regularly shaped rooms. In conclusion, we propose a well-trained deep learning model that can be used to construct a schema of the navigation area from 2D CAD blueprints.
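The accuracy comparison between the deep-learning and geometry-based extraction can be scored with an intersection-over-union (IoU) criterion, as is standard for object detection on COCO. The sketch below is illustrative only: the axis-aligned box format `(x1, y1, x2, y2)` and the 0.5 match threshold are our assumptions, not details taken from the paper.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Overlap rectangle, clamped to zero width/height when boxes are disjoint.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def room_extraction_accuracy(predicted, ground_truth, threshold=0.5):
    """Fraction of ground-truth rooms matched by some predicted box with IoU >= threshold."""
    if not ground_truth:
        return 0.0
    matched = sum(
        1 for gt in ground_truth
        if any(iou(p, gt) >= threshold for p in predicted)
    )
    return matched / len(ground_truth)
```

With such a metric, both the Faster R-CNN detections and the geometric algorithm's room polygons (reduced to bounding boxes) can be scored against the same hand-annotated ground truth.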