ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., IV-2, 89-96, 2018
https://doi.org/10.5194/isprs-annals-IV-2-89-2018
© Author(s) 2018. This work is distributed under
the Creative Commons Attribution 4.0 License.

28 May 2018

SATELLITE IMAGE CLASSIFICATION OF BUILDING DAMAGES USING AIRBORNE AND SATELLITE IMAGE SAMPLES IN A DEEP LEARNING APPROACH

D. Duarte, F. Nex, N. Kerle, and G. Vosselman
  • Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, Enschede, the Netherlands

Keywords: multi-resolution, dilated convolutions, residual connections, multi-scale, multi-platform, machine learning, UAV, earthquake

Abstract. The localization and detailed assessment of damaged buildings after a disastrous event is of utmost importance to guide response operations, recovery tasks, or insurance purposes. Several remote sensing platforms and sensors are currently used for the manual detection of building damages. However, there is broad interest in automated methods to perform this task, regardless of the platform used. Owing to its synoptic coverage and predictable availability, satellite imagery is currently used as input for the identification of building damages by the International Charter, as well as by the Copernicus Emergency Management Service, for the production of damage grading and reference maps. Recently proposed methods for image classification of building damages rely on convolutional neural networks (CNN). These are usually trained with only satellite image samples in a binary classification problem; however, the number of samples derived from these images is often limited, affecting the quality of the classification results. The use of up/down-sampled image samples during the training of a CNN has been shown to improve several image recognition tasks in remote sensing. However, it is currently unclear whether this multi-resolution information can also be captured from images with different spatial resolutions, such as satellite and airborne imagery (from both manned and unmanned platforms). In this paper, a CNN framework using residual connections and dilated convolutions is used, considering both manned and unmanned aerial image samples, to perform the satellite image classification of building damages. Three network configurations trained with multi-resolution image samples are compared against two benchmark networks in which only satellite image samples are used.
Combining feature maps generated from airborne and satellite image samples, and refining these using only the satellite image samples, improved the overall satellite image classification of building damages by nearly 4%.
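The two architectural ingredients named in the abstract, dilated convolutions and residual connections, can be illustrated with a minimal single-channel NumPy sketch. This is a simplified illustration of the general mechanisms only, not the authors' network: the function names and the toy kernels are hypothetical, and a real CNN would use many learned multi-channel filters.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """Single-channel 2-D 'same' cross-correlation with a dilated kernel.

    Dilation inserts (dilation - 1) zeros between kernel taps, enlarging
    the receptive field without adding parameters -- useful for capturing
    context at multiple scales, as in multi-resolution damage mapping.
    """
    kh, kw = kernel.shape
    # Effective kernel extent once the taps are spread apart by `dilation`.
    eh = dilation * (kh - 1) + 1
    ew = dilation * (kw - 1) + 1
    ph, pw = eh // 2, ew // 2  # zero padding for a 'same'-sized output
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):
        for j in range(kw):
            di, dj = i * dilation, j * dilation
            out += kernel[i, j] * xp[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out

def residual_dilated_block(x, kernel, dilation=1):
    """Residual (skip) connection around a dilated conv: y = x + relu(f(x)).

    The identity path lets gradients bypass the convolution, which is what
    makes deeper networks trainable.
    """
    return x + np.maximum(dilated_conv2d(x, kernel, dilation), 0.0)
```

For example, with a 3x3 kernel whose only nonzero tap is the center, `dilated_conv2d` returns its input unchanged for any dilation, and the residual block simply doubles a non-negative input, confirming that the skip path preserves the identity mapping.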